This tutorial will guide you through processing, analyzing, and visualizing your MRI data using mrTools.
After completing the tutorial you should be able to use mrTools to…
…examine raw data (epi and T1 weighted anatomies)
…determine whether motion-compensation worked
…examine the alignment between different data sets
…examine gray-matter / white-matter segmentations made by FreeSurfer and view surfaces
…find the calcarine sulcus and make flat patches of visual cortex
…average data and run pRF analyses
…determine the location of V1
…concatenate data and run an event-related analysis
You will need to download the git repositories for mrTools and mgl and make sure they are in your Matlab path. Do the following (specify the full pathname to mrTools and mgl in the commands below if they are not in your current directory):
addpath(genpath('mrTools')); addpath(genpath('mgl'));
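You can check that the paths were set correctly - each of the following should print the location of the corresponding file:

which mrLoadRet
which mglOpen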
If you are running the latest Matlab / MacOS version, you may run into problems with compiled files (.mexmaci64) not being recognized. For this tutorial, we do not need most of these files, so it should not be a big issue. You can check to see if you have a problem by running the following:
>> mydisp('test')
test>>
If instead of printing out test (without a newline) it crashes, then you have two choices. The easiest is to just remove the precompiled file:
>> delete(which('mydisp'))
>> which mydisp
/Users/justin/proj/mgl/utils/mydisp.m
That will remove the compiled file and allow the m file to run instead, which should work on any system.
Your other option is to recompile mydisp.c. You will need Xcode (Apple's developer tools, which include various compilers) installed, and then you can do the following:
mglMake mydisp.c
mlrMake
No matter what kind of experiment you do - you always want to look at the data you collect in as raw a format as you can. With neuroimaging this is even more important, because often data are processed with complex multi-step analyses and you can lose track of what features of your data give a result and get into real trouble. Luckily with imaging data, you can just look at your data, because, well, it's a bunch of images. So let's do that first!
First, start in Matlab by cd'ing to your data directory and running the program mrLoadRet. It will bring up a viewer. The first thing to note is the “Group” drop-down at the top-left. We store data in different “groups” (folders on the hard drive) depending on what stage of analysis the data is in. We want to look at the “Raw” group - so:
Step 1: Select the “Raw” group
In MRI measurements there are different images made with different contrasts (for example, T1 or T2 or T2*-weighted images). For this purpose, just know that a typical anatomical image that you might see is “T1-weighted”, where gray matter is gray, white matter is white and cerebrospinal fluid is black. You can take a look at that image by loading it as a “Base Anatomy”. A note here is that we can display summary statistics and data on many different kinds of base anatomies (we will see surfaces and flat-maps in a second). When we do so, we are just doing this for visualization - we transform the statistics on the fly to display on these base anatomies and always keep the raw data in its original format (resampling and interpolating only when necessary for visualization, so we avoid messing with the original data). We always collect a T1 weighted anatomical image every session - this is used for visualization and for alignment with a canonical anatomy and surfaces (as we will see below).
Step 2: Load the T1 anatomy
Go to the menu item: File/Base Anatomy/Load and select the T1 weighted anatomy. It should be called something like “anat01_11954_2_1_T1w_9mm_sag.nii”
You should see the name of the anatomy appear in the Base drop-down and a picture of the brain appear. A few fun things to do here:
Step 3: Examine T1 anatomy
Click on the “Multi” radio button. You should see 4 different views of the brain. If you click (make sure that Plots/Interrogate Overlay is unchecked) in any of the views (except the bottom-right one), it will update the other slices to show you that location in the brain. Explore!
Step 4: 3D anatomy view
Another view that is sort of fun is to select the 3D radio button and move the 3D slice view around. Not sure that you learn anything different, but it looks kinda cool.
Ok. Cool. So, what about the “functional” data? Well, those are collected at a much faster rate, typically with a fast imaging sequence known as EPI (Echo Planar Imaging), and have a T2*-weighted contrast. These are sensitive to the paramagnetism of deoxygenated hemoglobin (they get darker where there is more deoxygenated hemoglobin). Let's take a look at these.
Step 5: Look at raw EPI images
Go to Plots/Display EPI Images. You should see a viewer where you can look at individual images or slices. Click displayAllSlices and Animate over frames. This will show you a movie of all the images of the brain.
Note the different image contrast. What color is gray-matter? What color is white matter? How good is the resolution compared to the T1 we just looked at? Do you see evidence of head-motion? Do you see evidence of BOLD contrast (consider that large changes in BOLD are about a couple of % - do you think you could see them)?
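If you want to poke at the raw numbers behind these images, you can also load an EPI scan directly into Matlab. Here is a minimal sketch, assuming a hypothetical raw file named 'epi01.nii' (cbiReadNifti is the nifti reader included with mrTools):

% load the 4D EPI time series (x by y by slices by volumes)
data = double(cbiReadNifti('epi01.nii'));
% temporal mean and standard deviation of each voxel over the scan
m = mean(data,4);
s = std(data,0,4);
% express the fluctuations as a percent of mean signal and compare with
% the couple of percent signal change expected from BOLD
inBrain = m > 0.2*max(m(:));
disp(median(100*s(inBrain)./m(inBrain)));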
We typically run an algorithm that tries to use image-based registration (shifting and rotating the image of the brain to make all frames match as best as possible) to compensate for any subject head-motion. This usually helps, but sometimes it can mess things up if it doesn't work properly. So we like to check to see what it did. This should already have been run on your data (it's available from the Analysis menu, if you're interested), so let's just go take a look at it.
It is often helpful to examine our data on renderings of the cortical surface. This is done by processing a high-resolution T1 image of the brain to find the borders between the gray and white matter and the surface of the brain. Then you can segment out just the cortical surface. This is useful since you can throw out data from outside the brain or within the white matter, and visualize on the cortical topography. FreeSurfer is a useful set of tools that can do this automatically on a T1 image (you type one command and then come back in 24 hours or so and it has segmented the brain and made triangulated surfaces). Check here if you wish to know more about surfaces and flat maps. In this section we'll take a look at surfaces and make some flat maps for visualization.
First, we need an alignment. Note that we took the T1 image of the brain (which we will call the “canonical anatomy”) on a different day. Since you never put your head back into exactly the same place in the magnet, we need to find the transformation between where your head was on the day that we collected the canonical and the day of the experimental session. To do that, we use the T1 weighted image we looked at before, which we call the “session anatomy”. For more about this process see here.
We are going to use a tool called mrAlign to align the canonical and the session anatomy.
You can skip the following steps if the alignment has already been run. But if it hasn't, here's what you need to do:
Ok. Now that we are aligned to the canonical anatomy, we are ready to import a surface and take a look at it. You can skip these steps if the surface has already been imported, and instead select the surface or flat map in the Base dropdown menu, and then inspect the surface / flat map by going to Plots/Surface Viewer or Plots/Flat Viewer.
Often it is convenient to look at data on a flattened piece of cortex (makes it easier to view V1). Let's make flat patches around the calcarine sulcus. Again, if a flat patch is already made, then you can skip these steps and instead select the flat patch in the Base dropdown menu and inspect the Base by going to Plots/Flat Viewer.
Ok. Now we're ready to look at the pRF analysis.
Now we will run an event-related analysis on your paradigm to see what you got!
Make sure to save some figures (easiest is to do shift-command-4 then space and click on the viewer). These can be used for your presentation!
You may want to get the data from the eventRelatedPlot so that you can do your own analysis. For example, to replot the time series, or to run a classification analysis using a region of interest you have defined, like V1. To do that, click the “Save to Desktop” button in the eventRelatedPlot figure:
(eventRelatedPlot) Saving data from figure to variable eventRelated
>> eventRelated
eventRelated = 
          d: [1x1 struct]
      voxel: [1x1 struct]
    stimvol: {1x3 cell}
        roi: [1x1 struct]
This has data from both the voxel and the region of interest (if you had clicked on an ROI). The fields in the data structure should be pretty straightforward. For the voxel it looks like this:
>> eventRelated.voxel
ans = 
              vox: [61 64 15]
             time: [0.75 2.25 3.75 5.25 6.75 8.25 9.75 11.25 12.75 14.25 15.75 17.25 18.75 20.25 21.75 23.25 24.75]
             ehdr: [3x17 single]
          ehdrste: [3x17 single]
               r2: 0.3391862
          tSeries: [760x1 double]
    tSeriesByStim: {[19x17 double] [17x17 double] [18x17 double]}
vox is the location of the voxel in the data, ehdr contains the estimated hemodynamic response function for each condition (in this case 3 conditions) - i.e. your event-related analysis output for this voxel. ehdrste contains the standard error of those estimates computed over repetitions of the stimulus. r2 is the variance accounted for by the fit. tSeries contains the time series from which this was computed. tSeriesByStim is that same time series sorted by condition (i.e. in this case there were 3 conditions, with 19, 17 and 18 repetitions each. tSeriesByStim has the time series broken up so that it contains the response from the start of each trial out for 17 volumes - the length of the computed event-related hdr). Note that these sorted tSeries do not deal with response overlaps - it's just the raw time series after each event. It will also contain nans in the time series for events which don't have complete data (i.e. the scan ended before the response was finished).
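For example, here is a minimal sketch that replots the deconvolved response of this voxel for each condition with its standard error, using only the fields above:

% plot the estimated hemodynamic response for each condition with error bars
v = eventRelated.voxel;
figure; hold on;
for iCond = 1:size(v.ehdr,1)
  errorbar(v.time, v.ehdr(iCond,:), v.ehdrste(iCond,:));
end
xlabel('Time from trial start (sec)');
ylabel('Response');
title(sprintf('Voxel [%i %i %i], r2 = %0.2f', v.vox(1), v.vox(2), v.vox(3), v.r2));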
If you have a region of interest, it will contain similar fields, looking like this:
>> eventRelated.roi
ans = 
               scm: [760x51 double]
            hdrlen: 17
              nhdr: 3
               dim: [2 2 2 760]
           volumes: [1x760 double]
              ehdr: [3x17 double]
           ehdrste: [3x17 double]
                r2: 0.60908429155102
              time: [0.75 2.25 3.75 5.25 6.75 8.25 9.75 11.25 12.75 14.25 15.75 17.25 18.75 20.25 21.75 23.25 24.75]
             covar: [51x51 double]
           roiName: 'lV4'
               roi: [1x1 struct]
    meanTimecourse: [1x760 double]
If you want the time series, it will be in the field roi:
>> eventRelated.roi.roi
ans = 
             branchNum: []
                 color: 'black'
                coords: [4x636 double]
             createdBy: ''
    createdFromSession: 's037120180521'
         createdOnBase: 's0371_left_occipital'
                  date: '23-May-2018 16:26:13'
         displayOnBase: 's0371_left_occipital'
                  name: 'lV4'
                 notes: ''
             sformCode: 1
             subjectID: []
              viewType: 'Volume'
               vol2mag: [4x4 double]
               vol2tal: []
             voxelSize: [1 1 1]
                 xform: [4x4 double]
            scanCoords: [3x46 double]
               scanNum: 2
              groupNum: 4
              scan2roi: [4x4 double]
                     n: 46
               tSeries: [46x760 double]
         tSeriesByStim: {[19x46x17 double] [17x46x17 double] [18x46x17 double]}
Same format as the voxel, except you will see that tSeriesByStim has dimensions numRepeats x numVoxels x numTimePoints - where in this case the roi had 46 voxels.
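So, for example, if you wanted to set up a classification analysis, a minimal sketch for turning these sorted time series into a trials x voxels feature matrix per condition (averaging each trial's response over time, with nanmean ignoring the nans from incomplete trials) would be:

r = eventRelated.roi.roi;
features = cell(1,length(r.tSeriesByStim));
for iCond = 1:length(r.tSeriesByStim)
  % tSeriesByStim{iCond} is numRepeats x numVoxels x numTimePoints
  features{iCond} = squeeze(nanmean(r.tSeriesByStim{iCond},3));
end
% e.g. features{1} is now 19 trials x 46 voxels for the first condition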
If you want to analyze the behavioral data, you can get this from the workspace for the scan the mrLoadRet viewer is currently displaying (i.e. go to the group and scan in the GUI that you want the data for) and do the following:
>> stimfile = viewGet(getMLRView,'stimfile')
stimfile = 
    [1x1 struct]    [1x1 struct]    [1x1 struct]
viewGet is a function that can get various things about the scan you are viewing (type viewGet without any arguments to see all the things it can get), getMLRView accesses the current view being displayed, and stimfile contains the files that we save about your experiment. Note that it returned an array of 3 of these for this particular experiment since there are 3 scans that were concatenated together, each having its own stimulus file. You can convert these into a format that is easier to parse by running getTaskParameters:
>> e = getTaskParameters(stimfile{1})
e = 
                  nTrials: 20
              trialVolume: [1 14 26 39 53 69 85 100 113 129 145 160 175 191 206 220 235 251 267 NaN]
                trialTime: [1x20 double]
             trialTicknum: [1909 3078 4157 5326 6585 8024 9463 10812 11981 13420 14860 16210 17559 18998 20348 21607 22956 24392 25832 31319]
                   trials: [1x20 struct]
                 blockNum: [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20]
            blockTrialNum: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
                 response: [1 1 1 1 2 1 1 1 1 2 2 1 1 1 2 2 1 1 1 NaN]
             reactionTime: [1x20 double]
    originalTaskParameter: [1x1 struct]
         originalRandVars: [1x1 struct]
           responseVolume: [10 23 35 48 62 78 94 109 122 138 154 169 184 200 215 229 244 260 276 NaN]
          responseTimeRaw: [1x20 double]
                 randVars: [1x1 struct]
                parameter: [1x1 struct]
>> e.parameter
ans = 
    numJitter: [2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
>> e.randVars
ans = 
                match: [1 1 1 1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 0 0]
          orientation: {1x20 cell}
    orientationJitter: {1x20 cell}
      baseOrientation: {1x20 cell}
Note that variables that were used in your experiment will show up in either e.parameter or e.randVars depending on how they were coded. You can see the subject's response (which button was pressed) and what the variables were set to on each trial. A nan indicates no response. reactionTime is in seconds relative to the beginning of the trial segment in which the subject could respond. Remember that if you want to compute something like percent correct, you probably want to do this by accumulating the data across all the scans (i.e. do a for loop over each stim file), as in the sketch below. To interpret the particular parameters and what constitutes a correct answer or not, consult with us about how your particular experiment was coded.
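As a minimal sketch, assuming a single task per scan and a hypothetical scoring rule (button 1 correct on match trials, button 2 on non-match trials - again, check how your experiment was actually coded), percent correct across all scans could be computed like this:

nCorrect = 0; nTotal = 0;
for iScan = 1:length(stimfile)
  e = getTaskParameters(stimfile{iScan});
  % hypothetical scoring rule: match=1 -> button 1, match=0 -> button 2
  correctResponse = 2 - e.randVars.match;
  % drop trials with no response (nan)
  validTrials = ~isnan(e.response);
  nCorrect = nCorrect + sum(e.response(validTrials) == correctResponse(validTrials));
  nTotal = nTotal + sum(validTrials);
end
disp(sprintf('Percent correct: %0.1f%% (%i of %i trials)',100*nCorrect/nTotal,nCorrect,nTotal));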
You may want to display “contrasts” between conditions - i.e. color the brain according to which condition gave a larger response. One way to do that is to run a GLM and choose particular contrasts to display. It's point and click, but a bit involved because there are a lot of relevant settings. See here.
A simpler way is to compute something yourself and try to overlay it using the function mrDispOverlay. Here's a quick example, which takes your event related analysis and computes the difference in response for two conditions and displays that.
% First get the estimated responses for all voxels out of the structure
ehdr = eventRelated.d.ehdr;
size(ehdr)
ans =
    92    92    54     3    17
The array is xDim x yDim x nSlices x nConditions x hdrlen in size (in this case a 92x92x54 epi image and 3 conditions computed for 17 time points). Let's just take the difference between the 1st and 2nd condition.
diff = nanmean(ehdr(:,:,:,1,:)-ehdr(:,:,:,2,:),5);
Note that I use nanmean since some timepoints will have nans in them if the motion comp has had to interpolate from outside the volume. Also, I'm just averaging across the whole response - you could try different things.
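For example, instead of averaging over the whole response, you could restrict the average to timepoints around the response peak (volumes 3 through 8 here, which with the 1.5 sec sampling seen above is roughly 4 to 11 sec after trial onset):

% difference between conditions, averaged over volumes near the peak
diff = nanmean(ehdr(:,:,:,1,3:8)-ehdr(:,:,:,2,3:8),5);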
mrDispOverlay(diff,2,'Concatenation',getMLRView);
Note that the 2 is for scan 2 and the group is 'Concatenation'. Once the overlay is installed, it will show up in the viewer as the overlay mrDispOverlay.