====== mrTools Overview ======
mrTools provides a set of Matlab tools to analyze fMRI data. It can do basic analyses like the correlation analysis used in retinotopy experiments, as well as event-related, population receptive field (pRF) and GLM analyses. It can display the results of analyses on inplane anatomies, flat maps and surfaces. It is designed to make it easy to write your own scripts and programs in Matlab to analyze your data.
There are tutorials available for doing [[:mrTools:tutorialsretinotopy|retinotopy]], [[:mrTools:tutorialsprf|pRF]], [[:mrTools:tutorialseventrelated|event-related analysis]], [[:mrTools:tutorialsclassify|classification]] and [[:mrTools:tutorialsglm_v2|GLM]].
{{:mrtools:ertutorial9.png?183x175|}}
{{:mrtools:surftutorial_viewer2.png?183x175|}}
{{:mrtools:retinotopytutorial_coranal8.png?183x175|}}
{{:mrtools:retinotopytutorial_coranal6.png?342x166|}}
{{:mrtools:ertutorial8.png?230x166|}}
{{ :mrtools:blank.gif?196x196 |}}
====== Download ======
You can access an up-to-date read-only archive of mrTools using [[http://subversion.apache.org//|subversion]]:
svn checkout https://cbi.nyu.edu/svn/mrTools/trunk mrTools
Once you have used checkout to get the repository, you can get updates whenever you want to by doing:
svn update
The current directory must be 'mrTools' when you run the update.
See the [[http://gru.brain.riken.jp/statsvn/mrTools/index.html|commit logs]] for what has changed recently.
We recommend using subversion (see [[grupub:svn|here]] for a quick primer) if at all possible (so that you can easily stay on top of the many improvements to the code we continue to make). Mac OS 10.5-10.7 comes with subversion preinstalled. Mac OS 10.8 (Mountain Lion) does **not have svn preinstalled**. You can install svn by installing the latest [[http://developer.apple.com/tools/xcode/index.html|Xcode]] (make sure that you have checked the option to install the command line tools to get svn - if you already have Xcode installed but svn is not installed, go to Xcode > Preferences > Downloads > Command Line Tools > Install). You will also need to have Xcode installed if you need to recompile mgl. If you are not on a Mac, you can install svn directly from the [[http://subversion.apache.org/|subversion website]]. If you are really unable to get subversion, you can download [[http://gru.brain.riken.jp/download/mrTools.tar.gz|mrTools.tar.gz]], which contains a current version of the code. It should just open by clicking on it. If not, from a command line, you should be able to do:
gunzip mrTools.tar.gz
tar xvf mrTools.tar
Once you have downloaded, see [[:mrTools:GettingStarted|Getting Started]] for installation instructions.
==== Switching from previous svn repository ====
If you have an old svn repository that you downloaded from %%http://yoyodyne.cns.nyu.edu/svn/mrTools/trunk%%, then you can switch to the new repository by cd'ing to your mrTools directory and running:
svn switch --relocate http://yoyodyne.cns.nyu.edu/svn/mrTools/trunk http://cbi.nyu.edu/svn/mrTools/trunk
==== Problems downloading with SVN ====
If you have problems downloading with svn because your institution has a proxy server, you might have to let subversion know about it. Have a look at [[mgl:beta#download_with_proxy-server]] for details on how to set this up.
====== Getting Started ======
===== Add mrTools to your matlab path =====
At the Matlab prompt, type
addpath(genpath('/folderWhereYouInstalled/mrTools'))
Where folderWhereYouInstalled should be replaced with the path to the folder into which you downloaded the repository.
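If you don't want to retype this every time you start Matlab, one option (a general Matlab feature, not something mrTools requires) is to save the path after adding it:
addpath(genpath('/folderWhereYouInstalled/mrTools'))  % as above
savepath                                              % optional: keeps the path for future Matlab sessions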
===== Quick overview =====
The best way to get started using MLR is to go through the [[:mrTools:tutorialsRetinotopy|Retinotopy Tutorial]]. In the tutorial you will learn the basic steps of setting up an MLR session which are as follows:
- Make the directory structure for MLR.
- **Anatomy** contains the inplane and any other anatomy file used by MLR.
- **Raw/TSeries** contains the epi images of each scan.
- **Etc** contains supplementary files like stimulus timing files.
- Run [[:mrTools:functionReference#mrInit|mrInit]] or mrInitRet.
- Run [[:mrTools:mrAlign#Standard procedures for use with mrLoadRet|mrAlign]] and align your inplane anatomy to your "canonical" volume anatomy. Then align your epi images to the inplane anatomy and export to mrLoadRet-4.5.
- Run [[:mrTools:Analyses#motionCompensation|Motion Compensation]].
Note: To run mrTools, you need the following MATLAB toolboxes: Optimization, Statistics and Image Processing.
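If you are unsure whether these are installed, one quick (unofficial) way to check from the Matlab prompt is something like the following; note that exact toolbox names can vary between Matlab versions:
% Quick check for the required toolboxes (names may differ across Matlab versions)
v = ver;
installed = {v.Name};
for req = {'Optimization Toolbox','Statistics Toolbox','Image Processing Toolbox'}
  if ~any(strcmp(installed, req{1}))
    fprintf('Toolbox not found: %s\n', req{1});
  end
end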
====== Compiling MEX files ======
MEX files (Matlab C code) for several functions are pre-compiled to improve performance -- you should not need to do anything if you want to use them on the following platforms: MacIntel (**.mexmaci**), PowerPC Mac (**.mexmac**) and 64-bit Linux (**.mexa64**). The following is a list of the mex files used in MLR:
* mrAlign/regHistogram.c
* mrUtilities/ImageProcessing/myCinterp3.c
* mrUtilities/MatlabUtilities/mrDisp.c
* mrUtilities/mrFlatMesh/assignToNearest.c
* mrUtilities/mrFlatMesh/dijkstra.cpp
If you have trouble with any of these functions, you may want to recompile them. This can be done by cd'ing to the directory where the function lives, and typing (e.g.):
mex mrDisp.c
Note that to compile these functions on a Mac you will need to have [[http://developer.apple.com/tools/|Xcode]] installed on your computer. On other platforms you will need to have the gcc compiler.
If compilation fails it may be because we use some c%%++%% syntax in the files (comments that begin with %%//%% and variables not declared at the top of functions) and gcc is interpreting the **.c** file as ANSI c. You can override the ANSI c compilation and force a c%%++%% compilation on most systems by doing:
mex CC=g++ mrDisp.c
If you need to re-compile any mex functions for a 64bit platform, here are some things you should watch out for:
* to compile MEX files (usually .c or .cpp) on Matlab version 7.3 and later, you should make sure you have GCC version 4.0 or later installed. Some *nix distributions still have GCC3.3, which will not work
* if the %%c/c++%% code includes sparse matrix operations (like e.g. dijkstra.cpp) you should be aware that //The Mathworks// has changed indices into these matrices from type **int** to **mwIndex**
* to make use of the larger array sizes allowed by 64 bits you need to compile the mex files with the option -largeArrayDims, e.g.
>> mex -largeArrayDims dijkstra.cpp
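If you need to recompile everything at once, a small helper script like the following can save some typing (this is not part of mrTools, just a sketch; mrToolsDir is whatever folder you installed mrTools into):
% Recompile all of the MEX files listed above (helper sketch, not part of mrTools)
mrToolsDir = '/folderWhereYouInstalled/mrTools';
mexFiles = {'mrAlign/regHistogram.c', ...
            'mrUtilities/ImageProcessing/myCinterp3.c', ...
            'mrUtilities/MatlabUtilities/mrDisp.c', ...
            'mrUtilities/mrFlatMesh/assignToNearest.c', ...
            'mrUtilities/mrFlatMesh/dijkstra.cpp'};
for i = 1:length(mexFiles)
  [srcDir, srcName, srcExt] = fileparts(fullfile(mrToolsDir, mexFiles{i}));
  cd(srcDir);
  mex([srcName srcExt]);   % add -largeArrayDims here if you need 64-bit array support
end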
====== Printing this manual ======
If you wish to print out this manual, you can view it as a [[:mrTools:mrToolsSinglePage|single page]] and print from there.
====== Mailing List ======
If you want to subscribe to an email list which will make (very infrequent) announcements about major changes to mrTools, you can do so from [[http://cbi.nyu.edu/mailman/listinfo/mrtools-announce]]. Note that this is a moderated email list to which you will not be able to make posts, only receive announcements.
====== Retinotopy Tutorial ======
This tutorial shows you how to run a retinotopy analysis with mrLoadRet. It is a good place to start if you are just learning how to use mrLoadRet.
===== Overview =====
- Download the tutorial files
- The stimuli used in the experiment
- Make the directory structure for mrLoadRet and move the files into the correct locations
- Run mrInit to initialize the session
- Run the Motion Compensation
- Average wedge and ring scans together
- Run the correlation analysis
- Use mrAlign to align the 2D inplane anatomy to the 3D volume anatomy
- View the results on a flattened representation of occipital cortex
- Draw the visual area ROIs
===== 1. Download files =====
You can download the tutorial files from:
[[http://www.cns.nyu.edu/heegerlab/content/software/mrLoadRet/retinotopyTutorial.tar.gz|retinotopyTutorial.tar.gz]]
Note that this is a rather big file, approximately 300MB.
The files are provided as a tar/zip file. In Mac OS X you should be able to just click to open the files in the Finder. Otherwise, you can go to a terminal and do:
gunzip retinotopyTutorial.tar.gz
tar xvf retinotopyTutorial.tar
We provide you with a set of 10 functional scans, an inplane anatomy file, a high-resolution T1-weighted volume anatomy used for alignment and a flat map of the right occipital cortex to visualize the results on.
The functional scans include 6 rotating wedge runs (3 CW and 3 CCW rotations) and 4 expanding and contracting ring runs (2 expanding and 2 contracting). Each run is 168 volumes long (a volume was acquired every 1.5 seconds). The first 8 volumes will be thrown out to allow hemodynamic response and longitudinal magnetization to reach steady state. The data has already been motion compensated (since this step takes quite a bit of time to run).
From here, all the directions will be for commands to run under Matlab. It is assumed that you have properly set up mrTools using the directions under [[:mrTools:GettingStarted|Getting Started]].
===== 2. Stimuli =====
The visual stimuli were run using the mglDoRetinotopy command in the mgl library. A CCW wedge run looks something like the following (though the animated gif probably doesn't play at the right speed -- each cycle should take 24 seconds).
{{:mrtools:wedges.gif|}}
The rings stimuli look something like this:
{{:mrtools:rings.gif|}}
===== 3. mrLoadRet directory structure =====
Now that you have the raw image files, you need to make the directory structure for mrLoadRet and copy your files into the correct directories. (Note that this does not have to be done in Matlab, it can be done from the command line or from the Finder or from a shell script that is customized for the file types at your site).
- Start up Matlab and change directories into the downloaded tutorial data directory
cd retinotopyTutorial
- Now make a directory for the anatomy files and move the anatomies into that directory.
mkdir Anatomy
movefile inplane.* Anatomy
movefile volume.* Anatomy
movefile right_occipital.* Anatomy
- Now make the directory to hold your raw time series (i.e. the epi scans with your data), and move the scan files into that directory
mkdir Raw
mkdir Raw/TSeries
movefile Scan* Raw/TSeries
You should now have two top level directories (Anatomy and Raw) and all the files should have been moved into one of the sub-directories:
>> ls
Anatomy/ Raw/
===== 4. mrInit =====
Now you are ready to run mrInit which will setup your mrLoadRet session:
mrInit
You will first see a dialog like the following. Go ahead and fill in the fields to match what is shown here. This is information about the scan; it is used only for documentation, so you will usually fill in the fields appropriately for the scan you are running. Also, if you are using a different magnet and coil than the one at NYU, see the section [[:mrTools:functionreference#mrInit|mrInit]] for how to customize this dialog to your site. Finally, you can always change this information later from the mrLoadRet GUI. When you are done, click OK.
{{:mrtools:retinotopytutorial_mrinit1.png|}}
Now you will get a dialog to set the parameters for each scan in the session. For these we are going to give a name to each scan (either CW, CCW, Expanding, or Contracting), and set the number of junkFrames to 8. Setting junkFrames to 8 means that the first 8 frames of each scan will be thrown out; we do this because we always take an extra half cycle of the stimulus (16 volumes per cycle) to allow both the longitudinal magnetization and the hemodynamic response to reach steady state. Let's start by setting all the scans to have junkFrames of 8. We can do this by putting 8 into the junkFrames field and then pressing "Copy parameters":
{{:mrtools:retinotopytutorial_mrinit2.png|}}
Select all the rest of the scans. This is the quickest way to copy the junkFrames parameter to all of those scans. Press Ok.
{{:mrtools:retinotopytutorial_mrinit3.png|}}
Now, we will add a description string for each scan. For the first scan enter CW. Then press the right arrow next to the scanNum at the top of the dialog to get to the next scan. After you press the right arrow, you will be editing information for scan 2. Set the description string to CCW and the dialog should look like the following:
{{:mrtools:retinotopytutorial_mrinit4.png|}}
Then go through each scan, by pressing the right arrow next to the scanNum, and fill in the correct description. The description is just a name to help you remember what the scan was and could be anything you like (and can later be changed in the mrLoadRet GUI if need be). For these scans, just add either CW/CCW/Expanding/Contracting depending on what the scan .img filename is. When you are done, all scans should have a name and all scans should have junkFrames set to 8. Press OK.
You will get a message that asks if you want to append any notes. Answer no.
You should now be done with mrInit. If you want to check your work, you can use the function groupInfo to confirm that everything got setup correctly. For example, groupInfo alone will give you information about the session that you just entered and should look something like this:
>> groupInfo
========================================
homeDir: /Volumes/drobo/data/anatomy/Retinotopy/
description: Retinotopy Tutorial
operator: Laszlo Jamf subject: Slothrop
magnet: Allegra 3T coil: Siemens birdcage protocol: cbi_ep2d_bold:
========================================
1: Raw (10 scans) 363.0M
And, the groupInfo on the Raw group should give the following:
>> groupInfo('Raw')
1: CW
Filename: Scan01_CW.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
2: CCW
Filename: Scan02_CCW.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
3: Expanding
Filename: Scan03_EXP.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
4: Contracting
Filename: Scan04_CON.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
5: CW
Filename: Scan05_CW.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
6: CCW
Filename: Scan06_CCW.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
7: Expanding
Filename: Scan07_EXP.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
8: Contracting
Filename: Scan08_CON.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
9: CW
Filename: Scan09_CW.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
10: CCW
Filename: Scan10_CCW.img GroupName: Raw
junkFrames=[8] totalJunkedFrames=[0]
voxelSize=[3.0 3.0 3.0] TR=1.5000 Dims: [64 64 27] Volumes=168
The most important thing is that you got the junkFrames right. If you got something wrong, you can always change it in the next step, once you run the mrLoadRet viewer. To make changes to the session, use the menu item Edit/Edit mrSession, and to change the group information, use Edit/Group/Edit Group.
===== 5. Motion Compensation =====
You are now ready to start the mrLoadRet viewer:
mrLoadRet
This will bring up a window that looks like the following. Make sure to click on the Coronal/Sagittal/Axial radio buttons and play with the slice slider to view whichever slice of the anatomy you want to see:
{{:mrtools:retinotopytutorial_mainviewer1.png?500x500|}}
You may get a couple of warnings about the sform not being set. Click Ok to dismiss these. They are just warning you that you have not done an alignment yet, which we will do later in the tutorial. If you prefer not to get pop-up warnings and wait bars and instead would like these messages printed only to the Matlab command window, you can set the verbose preference (Edit->Preferences menu) to No (remember to always read the messages printed to the command window if you choose this setting!).
From here, the usual first step is to do motion compensation on the scan. For the purposes of this tutorial, we are going to skip motion compensation (the files provided have already been motion compensated). For more information on how to run motion compensation, please see [[:mrTools:Analyses#Motion Compensation]]. If we had done motion compensation, then all the motion compensated scans would have been moved into a new group called MotionComp and we would make the Averages from the MotionComp group. Instead, we will simply make the Average from the Raw group.
===== 6. Average time series =====
Now, we will average the time series together. There are two purposes for doing this. One is to improve the SNR. The other is to remove the effect of the hemodynamic lag. Because we run the rings and the wedges in both directions (e.g. CW and CCW), we can reverse one set of scans in time (say the CCW) when we average them with the CW scans. The hemodynamic lag makes the response lag a few seconds behind the stimulus. We can compensate somewhat for this by shifting the time series by a few volumes, but we don't know exactly how long the lag is. Because we reverse one direction and then average, the residual lags that the shift does not compensate for cancel each other out, and we get a response whose phase is the average of the two scans.
Note that this tutorial is written with a shift forward by 2 frames - the exact shift you need depends on the hemodynamic lag and the difference in starting phase between forward and reverse runs and so shifting forward by 2 frames may not work on other data you collect. It is worth testing out different shifts to see which one gives you the best retinotopy maps. A general guideline is that shifting by -3 to -5 frames (note the negative signs) rather than the +2 in this tutorial works for most cases.
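To see concretely why reversing and averaging cancels the lag, here is a toy Matlab illustration (this is not mrLoadRet code, and all the numbers are made up): a CW and a time-reversed CCW response carry equal and opposite phase lags, so their average has the voxel's true phase.
% Toy illustration (not mrLoadRet code) of why reversing one direction and averaging cancels the lag
nVols = 160; volsPerCycle = 16;
n      = (0:nVols-1)';
w      = 2*pi/volsPerCycle;             % stimulus frequency in radians per volume
theta0 = pi/3;                          % voxel's preferred polar angle (made up)
lag    = 2;                             % hemodynamic lag in volumes (made up)
cw  = cos( w*(n - lag) - theta0);       % lagged response to the CW wedge
ccw = cos(-w*(n - lag) - theta0);       % lagged response to the CCW wedge
ccwRev  = ccw(mod(-n, nVols) + 1);      % time-reverse the CCW scan (periodic reversal)
avgResp = (cw + ccwRev) / 2;            % equals cos(w*n - theta0) * cos(w*lag): the lag terms cancel
ft = fft(avgResp);
recoveredPhase = angle(ft(nVols/volsPerCycle + 1));   % = -theta0 (FFT sign convention), independent of lag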
So, to average the wedge scans together, select Analysis/Average Time Series. It should look something like this:
{{:mrtools:retinotopytutorial_average1.png|}}
You can ignore the interpolation method and base scan settings. These are only important if you are averaging together scans that come from different slice placements.
The important settings for us are whether to include the scan in the average, the shift, and the reverse flag. We will want to include all CW and CCW scans, set their shift to 2 volumes (approximate compensation for the hemodynamic lag), and reverse all the CCW scans. So, start by setting shift to 2 in the dialog. Now click the right arrow under scan and set Scan 2 to have a shift of 2 and the reverse flag set:
{{:mrtools:retinotopytutorial_average2.png|}}
Next, go to scan 3 and unclick it from the average.
{{:mrtools:retinotopytutorial_average3.png|}}
And, go through in the same way, setting all CW scans to be included with a shift of 2, all CCW to be included with a shift of 2 and the reverse flag set, and all other scans to not be included.
Click OK. And mrLoadRet will begin to average the scans together.
When it is done, go back and do the same thing again, but this time average together the expanding and contracting ring scans, setting the shift to 2 volumes and setting all contracting scans to be reversed.
Now, on the main viewer, change the Group drop down to Averages.
It is useful to change the names of the scans we just made to something that makes it easier to remember what they are. To do this, go to Edit/Group/Edit Group. The first scan should have the name
Average from Raw of scans: 1 2 5 6 9 10
Change it to
Wedges: Average from Raw of scans: 1 2 5 6 9 10
Change the second scan to
Rings: Average from Raw of scans: 3 4 7 8
And click OK.
To confirm that you did the averaging right, select Edit/Scan/Info, and you will get a dialog like this:
{{:mrtools:retinotopytutorial_scanInfo1.png?660x320|}}
Pay close attention to the fields scanList, shiftList and reverseList. They tell you how you did your averaging and they should look identical to the above.
You can also check the rings scan, by sliding the Scan slider on the main viewer to scan 2 and selecting Edit/Scan/Info, and it should look like this:
{{:mrtools:retinotopytutorial_scanInfo2.png?660x320|}}
===== 7. Correlation analysis =====
Now we are ready to run the correlation analysis. Choose the menu item Analysis/Correlation Analysis. For both scans, you will select (re)compute and set the number of cycles to 10 (after discarding the 8 junk frames, 160 volumes remain, and at 16 volumes per cycle that makes 10 stimulus cycles per scan). It should look like the following:
{{:mrtools:retinotopytutorial_coranal.png|}}
When it has finished running, you should see something like this:
{{:mrtools:retinotopytutorial_coranal4.png?600x600|}}
Now, let's try to examine single voxel responses on the map. First, move the Slice slider to Slice 17. Then choose the co Overlay (Coherence). Choose an Overlay min of 0.4 so that you are only looking at voxels with a high correlation value. Then turn on the voxel interrogator by going to Plots/Interrogate Overlay. You should see something like this:
{{:mrtools:retinotopytutorial_coranal5.png?600x600|}}
Note the Interrogator drop down in the lower left of the viewer and that it is selected to corAnalPlot. When you move your mouse over the image it should now turn into a giant cross-hairs. Choose one of the voxels (the example below is voxel [27 28 17]) with the darkest red color and click. After a moment, you should see a figure like this:
{{:mrtools:retinotopytutorial_coranal6.png?600x290|}}
It looks very well modulated at the stimulus frequency! The contrast-to-noise ratio (i.e. the magnitude of the response at the task frequency, marked in red in the lower right graph, divided by the mean of the points marked in green) is over 100-to-1.
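For reference, here is a rough sketch of the kind of quantity the correlation analysis computes for each voxel (an illustration of the idea, not the actual corAnal code), assuming tSeries is a single voxel's 160-volume time course as a column vector:
% Rough sketch of coherence and phase at the stimulus frequency (illustration only)
nCycles = 10;                                  % stimulus cycles per scan
ft  = fft(tSeries - mean(tSeries));            % Fourier transform with the mean removed
amp = abs(ft(2:floor(end/2)+1));               % single-sided amplitudes, excluding DC
co  = amp(nCycles) / sqrt(sum(amp.^2));        % relative amplitude at the stimulus frequency (coherence-like)
ph  = angle(ft(nCycles+1));                    % response phase at the stimulus frequency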
===== 8. Run Alignment =====
Now we will run the alignment so that we can view the data on a flat surface. Start up mrAlign:
mrAlign
From File/Load Destination (Volume) go and find the volume anatomy (Anatomy/volume.img) and load it.
From the File/Load Source (Inplane) go and find the inplane anatomy (Anatomy/inplane.img) and load it.
Under Compute Alignment, choose Initialize Alignment from Header.
Now set the slice slider to somewhere in the middle and click Transpose and Flip and you should see something like this:
{{:mrtools:retinotopytutorial_mralign1.png?406x462|}}
You can run a full alignment by going into the Compute Alignment menu, but for this example the Alignment from Header has been cooked to be perfect. (Normally, this won't be the case, check the [[:mrtools:mralign|mrAlign documentation]] for more info on all the parameters you can set for real alignments).
So, instead just go to File/Save Alignment and save the alignment. This saves the alignment between your inplanes and the reference volume anatomy in the header (sform field) of the nifti inplane.img file.
Now we need to align the scans with this inplane. We can do that by first going to File/Reload Source as Destination to set the inplanes as the slices that we are aligning to.
Next, go to File/Load Source (Inplane), and go to Raw/TSeries and select any one of the Scan files. Shortly, a dialog will come up and you can select the mean image to display (since the epi images are a sequence of images, you have to choose either to show the mean or a particular time frame). Note that on some systems this dialog may pop up *below* the mrAlign viewer. If this happens just move the mrAlign viewer until you see the dialog.
Now select Compute Alignment/Initialize Alignment from Header. Since the scan images were taken during the same session as the inplanes, the initial alignment in the header should be perfect (unless the subject moved their head). Click the coronal button, the flip button and select a middle slice and set the alpha to about 50%, and you should see something like this:
{{:mrtools:retinotopytutorial_mralign2.png?406x462|}}
Now you are ready to export the alignment to the mrLoadRet Session. Go to File/Export/Export to mrLoadRet-4.5 and click yes on the dialog that comes up, then you will be given the option of what groups to export to. You should export to all groups and press OK.
Quit out of mrAlign, you are done with alignment.
===== 9. Flatmap =====
Now, load the flat patch of the right occipital cortex as your base anatomy. Go to File/Base Anatomy/Load and load the file right_occipital.img. If you go back to viewing the ph overlay on Scan 1, you should see:
{{:mrtools:retinotopytutorial_coranal2.png?600x600|}}
This is the phase map for the wedges, so the color represents the polar angle of the visual field. We can switch to scan 2 and look at the eccentricity map as well. To do that switch the slider to Scan 2.
There are a few visualization settings that are also useful to play around with now. One is to not show voxels that have low coherence with the stimulus. You can do that by choosing the co Overlay in the drop down menu on the main window, and then setting the Overlay min to, say, 0.3. Now switch back to the ph Overlay and you should see only voxels with high coherence with the stimulus. You can also change the colormap if you like; to do that, go to Edit/Overlay/Edit Overlay and you will get a dialog like the following:
{{:mrtools:retinotopytutorial_editoverlay.png|}}
If you shiftCmap by 48, you should get a map that looks like this:
{{:mrtools:retinotopytutorial_coranal3.png?600x600|}}
===== 10. Defining visual areas =====
Now, looking at the right_occipital flat map, set the Overlay min to a co value of about 0.2 and switch back to the ph Overlay. The red/pink points are the lower vertical meridian and the greenish points are the upper vertical meridian. V1 covers the whole contralateral hemifield, so we start by marking off an ROI that spans these two meridians. Go to ROI/Create/Polygon and then give the ROI the name r_v1. Then place points to mark off the area shown below. When you are done marking off the area, double-click:
{{:mrtools:retinotopytutorial_coranal7.png?600x600|}}
Note that if you are unhappy with the ROI drawing tool (e.g. it is very slow), you can change how it works in Edit/Preference/roiPolygonMethod.
Now you can draw r_v2d using the same method, by drawing from the lower vertical meridian to the horizontal meridian (yellowish color), and then r_v3d from the horizontal meridian back to the lower vertical meridian. Then, similarly for the ventral parts of V2 and V3, you draw from the upper vertical meridian (green) to the horizontal meridian (yellow) and back again. Finally, we draw (though there is some controversy on this issue) V4 as the full hemifield map below v3v. Of course, there are many other visual areas that you can define. For more information on other areas we routinely define, see [[http://www.jneurosci.org/cgi/content/full/26/51/13128|Larsson & Heeger (2006)]] and [[http://jn.physiology.org/cgi/content/full/94/2/1372|Schluppeck, Glimcher & Heeger (2005)]] and references therein. It should look something like:
{{:mrtools:retinotopytutorial_coranal8.png?600x600|}}
If you want to make sure that ROIs do not overlap each other, you can use the combine tool in ROI/Combine. For instance, if you want to make sure that r_v1 has no voxels from r_v2d, you would set up the dialog like this:
{{:mrtools:retinotopytutorial_roiCombine.png|}}
and press Do combination.
Finally, you may want to extend your ROIs through the cortical depth. Note that by default you are looking at the mid-cortical depth (see the Cortical Depth slider on the main viewer). If you want your ROIs to extend through the gray matter, you can use ROI/Convert/Convert cortical depth. You want to Project through depth, and then, starting at your current depth (0.5), copy the ROI through the whole gray matter. So you set referenceDepth to 0.5 and then, starting at 0 (minDepth), go in increments of 0.1 (depthStep) until reaching the maxDepth of 1, adding voxels that are in the same location to the ROI. Be aware that if you extend your ROI too far, you may start to cross sulcal boundaries, so use this tool carefully. Also, this tool will only convert voxels if they exist in the currently displayed flat map - so if you want to do ROIs on the other hemisphere, you will have to switch to the other hemisphere's flatmap and convert those ROIs separately:
{{:mrtools:retinotopytutorial_corticalDepth.png|}}
When you are done, save your ROIs from the File/ROI/Save all menu item.
Ok! That's it. When you quit through the menu item, you might find that it takes a bit of time to close. This is because mrLoadRet is saving your settings into a file called mrLastView.mat so that when you start again everything will be back in the same place you left it. If you don't want to save your settings, you can always type mrQuit(0) at the command line and mrLoadRet will close without saving. Or, the next time you run, you can start with mrLoadRet([]) and it will run without loading the mrLastView.mat file. Be aware that until you explicitly save ROIs through the File menu, the ROIs only exist in mrLastView.mat. Your correlation analysis and the averaging, on the other hand, have already been saved to disk. You can always reload the analysis from the File/Analysis/Load menu item.
One last thing that is worth mentioning here. If you don't like all the dialog box warnings, you can set Edit/Preferences/Verbose to "No". This will print all warnings and errors to the Matlab command window, so be sure to check there to make sure you don't miss important messages. Also, if the code crashes at some point, it may put you into the Matlab debug mode, which looks like this:
K>>
If you see this happening, you should note what error occurred and then get out of debug mode by typing
K>> dbquit
Note that you can reduce the number of such occurrences by using the GUI in the way that it is intended. A large number of crashes occur because people click on the red button at the top of the figure to close it rather than using the Quit/Close menu item or a button on the GUI like OK/Cancel. When you click the red button to close a figure it does not allow the close code to run and the GUI can crash (this is mostly harmless, but can be annoying). You should be especially careful to use the File/Quit menu item from the main GUI to close MLR, since that guarantees that the next time you load the viewer, you will return to the same state of the viewer, including all the same loaded Anatomies, ROIs and Analyses.
-jg.
====== Event Related Tutorial ======
This tutorial will show you how to run an event-related analysis. Note that the [[:mrTools:tutorialsRetinotopy|Retinotopy Tutorial]] is the best place to start for a full introduction to using mrLoadRet. This tutorial starts after the initial steps of running mrInit, the Motion Compensation and Alignment -- those steps are described in the retinotopy tutorial. The experiment that is being analyzed here is one in which the subject viewed randomly moving dots. On each trial, the dots moved coherently for a couple of seconds in one of three directions, and then went back to moving incoherently for several seconds. Thus there are three different trial types that will be deconvolved, for each of the three directions of motion, and we will look at the responses in MT.
===== Overview =====
- Download the tutorial files
- Concatenate scans together
- Set stimfiles
- Run event-related analysis
- Viewing the analysis
===== 1. Download =====
You can download the tutorial files from:
[[http://www.cns.nyu.edu/heegerlab/content/software/mrLoadRet/erTutorial.tar.gz|erTutorial.tar.gz]]
Note that this is a rather large file. Approximately 300MB.
The files are provided as a tar/zip file. In Mac OS X you should be able to just click to open the files in the Finder. Otherwise, you can go to a terminal and do:
gunzip erTutorial.tar.gz
tar xvf erTutorial.tar
We assume that you have mrTools running on your system.
===== 2. Concatenate =====
Start up Matlab and switch to the erTutorial directory you just downloaded, and run mrLoadRet:
cd erTutorial
mrLoadRet
We are going to run a concatenation. The purpose of concatenating scans together is first to make a single long scan out of a few shorter scans so that we can estimate the responses using all the data. Second, this step runs preprocessing steps on the data such as high-pass filtering and converting to percent signal change.
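For reference, these two preprocessing steps can be sketched roughly like this (an illustration of the idea, not the actual concatTSeries code); tSeries is assumed to be one voxel's time course as a column vector, and the TR and cutoff values below are made-up placeholders:
% Rough sketch of percent-signal-change conversion and high-pass filtering (illustration only)
TR      = 1.5;                                  % volume acquisition time (assumed)
cutoff  = 0.01;                                 % hypothetical high-pass cutoff in Hz
pct     = tSeries / mean(tSeries);              % divide by the mean, so the mean becomes 1
ft      = fft(pct - 1);                         % Fourier transform of the modulation
f       = (0:length(ft)-1)' / (length(ft)*TR);  % frequency of each FFT component
ft(min(f, 1/TR - f) < cutoff) = 0;              % zero out the slow drift components (both sides of the spectrum)
filtered = real(ifft(ft)) + 1;                  % filtered time course with the mean restored to 1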
Make sure the Group tab is set to MotionComp and then Select Concatenate Time Series from the Analysis menu. You should see a dialog like the following:
{{:mrtools:ertutorial1.png|}}
We do not need to change anything on this dialog to have it run. For a description of the meaning of the parameters press help, or see the [[:mrTools:Analyses#Event Related Analysis|Event Related manual pages]].
Press OK.
You will see another dialog where you should select all the scans for concatenating:
{{:mrtools:ertutorial2.png?750x203|}}
Press OK. This will take a few minutes to run.
===== 3. Set stimfiles =====
We will now tell mrLoadRet which "stimfiles" to use for these scans. Stimfiles contain the information about the timing of events in the experiment that is used for the deconvolution analysis. Note that we set the stimfiles on the Raw TSeries (i.e. the original time series). This can be done before or after the concatenation is done. The concatenated scan keeps a record of which original scans, and from which group, it was made. So, to find the correct stimfiles, mrLoadRet always checks back to those scans to see what stimfiles they are linked to. This way, you can always change how you have linked the stimfiles at any time.
Switch the Group tab to Raw. Go to Scan 1. Now go to the Edit menu and select Edit/Scan/Link Stimfile. You should get a file choosing dialog like this:
{{:mrtools:ertutorial3.png|}}
Choose to link the first scan with the first stimfile "stimvol01.mat" and click Open.
Now, set the scan slider to 2 and repeat this process, linking scan 2 with stimvol02.mat.
Link all the rest of the scans accordingly, (i.e. Scan 3 -> stimvol03, Scan 4 -> stimvol04 and Scan 5 -> stimvol05).
While this is a bit tedious, the process can be automated for a real experiment you run, and the mrInit dialog makes it easier to link all the stimfiles at one time when you first start your session.
Now, let's go check to make sure that you have linked the files correctly.
Switch to the Concatenation Group, and then choose the Edit/Scan/Info menu item.
You should see something like the following:
{{:mrtools:ertutorial4.png?600x286|}}
In particular, check that the stimFilenames are listed in sequential order. If they are not, switch back to the Raw group and fix it by re-linking the stimfiles again.
The stimfile format used here is very simple. It is a cell array where each element of the cell array is for a different trial type (in this case, different directions of dot motion). Each element of the cell array is an array listing the volume numbers on which that stimulus occurred. There is also a stimNames cell array that gives the names of the trial types. You can see what this looks like by doing (from the Matlab command line):
v = newView;
stimfile = viewGet(v,'stimfile',1,1);
and you should see that it looks like:
>> stimfile{1}
ans =
stimvol: {[21 38 60 75 96 104 150 156 181 195 223 256 285 300 305 332 364] [1x18 double] [1x17 double]}
stimNames: {'Direction: 35 deg' 'Direction: 135 deg' 'Direction: 255 deg'}
filetype: 'stimvol'
For a description on how to create your own stimfiles see the [[:mrTools:Analyses#Event Related Analysis|Event Related manual pages]].
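As a rough illustration (the manual pages linked above are the authoritative reference), creating a stimfile in the 'stimvol' format shown above amounts to saving a stimvol cell array and a stimNames cell array into a .mat file. The filename and volume numbers below are made up:
% Hypothetical example of writing a stimfile in the 'stimvol' format (volume numbers made up)
stimvol   = {[21 38 60 75] [15 42 66 90] [30 55 81 101]};   % volumes on which each trial type started
stimNames = {'Direction: 35 deg' 'Direction: 135 deg' 'Direction: 255 deg'};
save('stimvol06.mat','stimvol','stimNames');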
===== 4. Event Related Analysis =====
Now, let's run the event-related analysis. Select Event-related analysis from the Analysis menu, and you should see the following dialog:
{{:mrtools:ertutorial5.png|}}
You should select "applyFiltering". This checkbox filters the columns of the event-related design matrix in the same way that the data have been filtered (i.e. high-pass filtering). Click OK.
Select the single scan in the next dialog box, and click OK.
You should now see:
{{:mrtools:ertutorial6.png|}}
This dialog sets some parameters for the event-related analysis. The hdrlen is the length of the response to calculate in the deconvolution.
Click OK.
Running the analysis will take a few minutes. Note that if you are on a system that does not have very much memory, you can choose to have mrLoadRet break up the computation into smaller chunks. This is set in the Edit/Preferences dialog. The smaller the chunks, the longer the calculation takes, but the less memory it uses.
Once the analysis is complete, set the overlay min to 0.2 and the Slice slider to 8, and you should see something like the following:
{{:mrtools:ertutorial9.png?600x600|}}
This is an r2 map of the event-related analysis. It tells you what proportion of the variance in each timecourse is accounted for by the average hemodynamic response. See [[http://www.cell.com/neuron/retrieve/pii/S0896627305006100|Gardner et al., Neuron (2005) 47:607-20]] for more information.
To see what the responses look like at any individual voxel, turn on the interrogator (Plots/Interrogate Overlay) and you should see "eventRelatedPlot" in the text box in the bottom left (see above figure). Go ahead and move the cross-hairs over the voxel with the highest r2 value (white) and click. You should then see:
{{:mrtools:ertutorial7.png|}}
This shows the three deconvolved responses to the three different trial types. If you click on the button to Plot the time series, you will see:
{{:mrtools:ertutorial8.png|}}
This shows the time series at the top, with a vertical line every time one of the trials occurred. Note that the time course is in percent signal change, and has been set to have a mean of 1. This is what the concatenation code does. The mean is set to 1, so that if you divide by the mean again (to get percent signal change), nothing will change in the time series.
You can also look at average responses across an ROI. To do that, load the l_mt roi, by using File/ROI/Load. You should now see:
{{:mrtools:ertutorial10.png?600x600|}}
Click inside the l_mt roi and you will get:
{{:mrtools:ertutorial11.png|}}
The graph on the left shows the individual voxel you clicked on. The graph on the right shows the deconvolution averaged across all the voxels in the l_mt ROI that meet the r2 cutoff that you set in the Overlay Min. This display simply averages across the already computed deconvolved responses. If you want to get error bars, click the "Compute Error bars" button, and after a little bit, you should see:
{{:mrtools:ertutorial12.png|}}
What this does is reload all the time courses that meet the r2 cutoff set by the overlay min and then average those time courses together into a single time course. It then recomputes the deconvolution analysis on this average time course. The error bars are now standard errors over the number of repeats of the stimulus. (Technically, they are computed as the residual variance distributed back to each time point in the deconvolved response according to the inverse of the covariance of the design matrix.)
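A minimal sketch of the computation described in that parenthesis (an illustration only, not the actual mrTools code) might look like this, assuming y is the averaged time course and X is the event-related design matrix built from the stimulus volumes:
% Minimal sketch of deconvolution with standard errors (illustration only)
hEst   = X \ y;                                      % deconvolved responses (ordinary least squares)
resid  = y - X*hEst;                                 % residual time course
sigma2 = sum(resid.^2) / (length(y) - size(X,2));    % residual variance
covEst = sigma2 * inv(X'*X);                         % variance distributed by the inverse design covariance
stdErr = sqrt(diag(covEst));                         % standard error at each time point of the response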
-jg.
====== Surfaces and Flat Maps Tutorial ======
This tutorial will show you how to import a surface segmentation into mrLoadRet so that you can view your data on a surface. It will also show you how to make a flat map from that surface.
===== Overview =====
- Download tutorial data
- Create a curvature file
- Load event-related analysis to view on inplane anatomy
- Import surface
- Create a flatmap
- Finding voxels in different anatomies
- Inflated surface
===== 1. Download =====
You can download the tutorial files from:
[[http://gru.brain.riken.jp/pub/surfTutorial.tar.gz|surfTutorial.tar.gz]]
The files are provided as a tar/zip file. In Mac OS X you should be able to just click to open the files in the Finder. Otherwise, you can go to a terminal and do:
gunzip surfTutorial.tar.gz
tar xvf surfTutorial.tar
We assume that you have mrTools running on your system. This is a pared-down version of the same dataset as the Event Related Tutorial. To make it easier to download, it does not include all the data files, so you cannot rerun analyses on this example. The data have already been motion corrected, aligned to the 3D volume using mrAlign, and concatenated, and the event-related analysis has been run.
===== 2. Create curvature =====
To view your data on a surface, mrLoadRet needs a segmentation. A segmentation consists of two surfaces, one that tells where the outer (pial) layer of the gray matter is and one that tells where the inner layer is (between the white matter and the gray matter). These files can be created by any segmentation tool as long as they are in .off format. We typically use [[http://surfer.nmr.mgh.harvard.edu|FreeSurfer]] (see [[:mrtools:howto#use_free_surfer_with_mlr|here]] for how to import) to generate segmentations. Jonas Larsson's [[http://www.pc.rhul.ac.uk/staff/J.Larsson/software.html|SurfRelax]] and Caret (though Caret only produces a mid-cortex surface, not the outer and inner surfaces that we like to use; see [[:mrtools:howto#use_caraet_with_mlr|here]] for how to import) can also be used.
Once you have a segmentation, mrLoadRet also needs a file which tells it how to color the surface. We use the curvature to color the surface. In this tutorial we have provided an outer and inner surface for the left cortex, but no curvature file. You thus need to create the curvature file:
First cd into the Segmentation folder:
cd Segmentation
You should see the following files:
>> ls
leftOuter.off volume.img
leftInner.off volume.hdr
Now generate the curvature from the inner surface
m = calcCurvature('leftInner','leftOuter');
And then save it in vff format:
saveVFF('leftCurvature.vff',m)
===== 3. Display event-related Analysis =====
First, we will start up mrLoadRet
cd surfTutorial
mrLoadRet
Then we will load the already computed event-related analysis and view it on the 2D inplane anatomy. Make sure you are on the Concatenation group and then from File/Analysis/Load, load erAnal/erAnal.mat. Set the Overlay Min to 0.15, the Slice to 8, and you should see (you might need to go to slice 15 and set Rotate to 90):
{{:mrtools:surftutorial_viewer1.png?600x600|}}
===== 4. Import surface =====
Now, we are ready to import the surface. Go to File/Base Anatomy/Import Surface, and find the outer surface file in the segmentation directory (i.e. Segmentation/leftOuter.off) and click Open.
You should now see a view of the surface like this:
{{:mrtools:surftutorial_import1.png|}}
With a set of controls that look like:
{{:mrtools:surftutorial_import2.png|}}
This dialog should have the correct settings to load your surface already, but it allows you to make changes and selections. The surfaces appear four times in the dialog. This is because there is a distinction between a display surface and a coords surface. The display surface is the surface that will be actually drawn. The coords surface specifies where the data for the displayed surface comes from. In our case these are one and the same, but if you were displaying on an inflated surface, for example, you would want the display surface to be the inflated surface and the coords surface to be the gray matter surface. The outer and inner surfaces are what will be displayed when the cortical depth slider (more later on this) is set to 1 and 0, respectively.
The other files you see here are the curvature file which is simply how to color the surface. The anatomy file is the 3D volume anatomy from which these surfaces were segmented. It is important to have this file since it contains the right transformation matrix that is needed to transform data on to the surface to be displayed.
At the top of the dialog, you will see a field called "whichSurface"; this chooses which surface is currently being displayed. It is there so that you can inspect all the surfaces to make sure you have chosen the right ones. For example, set it to Inner (White matter) surface, and you should see:
{{:mrtools:surftutorial_import3.png|}}
If you set whichSurface to 3D Anatomy, you should see this:
{{:mrtools:surftutorial_import4.png|}}
The 3D Anatomy is particularly important to inspect. The yellow line shows you where the outer coords are being taken from. It should follow the outer surface of the gray matter. The data that will be displayed will always come from the location of where the yellow line is, if you set the cortical depth to 1. There is also a white line. This should follow the inner surface of the gray matter. It is where data will be interpolated from when you set the cortical depth to 0. When you set the cortical depth to an intermediate value (e.g. 0.7) it will interpolate the data from the appropriate depth between the inner and outer surface.
Again, it is a very good idea to look at this view carefully to understand how good your segmentation was.
When you are done, click OK.
In a moment, you should see in the main viewer, something like this:
{{:mrtools:surftutorial_viewer2.png?600x600|}}
You can use the rotate and tilt sliders to rotate and tilt the surface. You can also change the cortical depth that is being displayed. When you set the cortical depth to 1, you will see the outer surface:
{{:mrtools:surftutorial_viewer3.png?600x600|}}
When you set the cortical depth to 0, you should see the inner surface:
{{:mrtools:surftutorial_viewer4.png?600x600|}}
Note that if you ever want to go back to look at the files that created the surface, you can always go to Plots/Surface Viewer.
Now that you have imported the surface, you may want to save it in a format that MLR can read directly. First, you may want to change its name, which you can do by going to Edit/Base Anatomy/Edit Base Anatomy and changing the name there. Next, save it by going to File/Base Anatomy/Save. This will save a file in the Anatomy directory that can be read directly back into MLR without importing. If you want to save to a different directory (for instance the volumeDirectory repository, which you should set in Edit/Preferences volumeDirectory), then click on File/Base Anatomy/Save As... and it will bring up a file save dialog that opens in the Edit/Preferences volumeDirectory.
===== 5. Make a flat patch =====
Now that you have the surface loaded, it is easy to make a flat patch of any location you want.
Go to Plots/Interrogate Overlay and turn the interrogator on (by clicking the menu item). In the bottom left of the viewer, select "makeFlat" as the interrogator from the drop down list box, so that makeFlat now appears in the text box at the bottom left of the viewer:
{{:mrtools:surftutorial_viewer5.png?600x600|}}
You can now click on any portion of the surface that you want to flatten. For instance, the patch of brightly colored voxels that you see in the mid-occipital cortex is MT. Click on that, and you should see a preview of the flat patch to be made that looks like:
{{:mrtools:surftutorial_makeFlat1.png|}}
And you will get a parameter dialog that looks like:
{{:mrtools:surftutorial_makeFlat2.png|}}
Like last time, you can look at the various surfaces, by changing "whichSurface". You can also change the way the patch is colored, for example, try "Current Overlay with Patch" in "patchColoring" and then you should be able to tell if the patch you are making covers the whole area of activated cortex that you are interested in. You can also change the radius of the patch to make it bigger or smaller.
The only parameter we need to change, though, is the name. Change the flatFileName from "leftInner_Flat_52_71_105_Rad75.off" to "leftMT" and click OK.
You should then see the following dialog:
{{:mrtools:surftutorial_makeFlat3.png|}}
The flatRes is the "resolution" of the flat patch. The flat patch will be converted from a true surface into a 2D image, and the resolution refers to how high a resolution you want to represent that flat image with. 2 is usually a good number, but if the resulting flat patch looks pixellated, you may try a higher resolution. The threshold check box will make it so that the curvature is thresholded, producing a two-color flat patch. If you want to have the smooth curvature displayed instead, turn off the threshold.
flattenMethod selects which system to use to flatten the patch. We typically use mrFlatMesh, which is a modified version of the code that came from the [[http://white.stanford.edu/Software.php|Stanford Vista]] package developed by Brian Wandell's lab. If you are using Jonas Larsson's [[http://www.pc.rhul.ac.uk/staff/J.Larsson/software.html|SurfRelax]] tools and have those properly installed, you can choose SurfRelax and try that.
Select mrFlatMesh as the flattenMethod and click OK. After a few minutes, you should see:
{{:mrtools:surftutorial_viewer6.png?600x600|}}
Note that you may need to rotate the patch to get it in the same orientation. In general, you should try to orient your patches so that dorsal is up and ventral is down - the next section, which shows you how to [[#finding_a_voxel|search for voxels]], is useful for this, particularly in the "continuous mode" described below, which will show you which part of the patch is at the top and bottom of the brain.
You are now done making your first flat patch.
If you want to go back to the view of the flat patch on the surface, you can select Plots/Flat Viewer. This is a useful thing to do, since you can also see how the patch is oriented and see how much distortion is in the patch.
For example, from the Flat Viewer, select "whichSurface" to be "Patch" and "patchColoring" to be "Stretched areas in Red". You should see something like:
{{:mrtools:surftutorial_patch1.png|}}
You may also want to see where your patch coordinates are coming from on the volume anatomy. Choose "patchColoring" to be "Uniform" and "whichSurface" to be "3D Anatomy" and you should see:
{{:mrtools:surftutorial_patch2.png|}}
The yellow and white are the same as before (your outer and inner surface boundaries). But, the Red and Pink parts are the outer and inner coords for your flat patch.
You can click OK to get out of the Flat Viewer.
You may also now want to change the baseName and save the flat patch to a native MLR format (nifti pair with associated matlab file) by following the directions above at the end of the section on [[#Import_surface|Importing a surface]]. Note that when you save it will save the default rotation and other parameters so that the next time you load it the flat patch will be oriented correctly.
===== 6. Finding a voxel =====
Sometimes it can be very confusing to look at a flat patch and decide where a particular voxel of data came from. One way to figure that out is to use the "searchForVoxel" interrogator.
You should still have the interrogator on (Plots/Interrogate Overlay). Now select searchForVoxel in the drop down list box at the bottom left of the viewer, and "searchForVoxel" should now appear in the interrogator text box. Now if you click on one of the activated voxels in the center of the patch, you should get a dialog like this:
{{:mrtools:surftutorial_searchForVoxel1.png|}}
Assuming continuousMode is checked, then as you move your mouse over the flat patch in the main viewer, you should see a little yellow dot move around in another figure window that pops up like this:
{{:mrtools:surftutorial_searchForVoxel3.png?610x247|}}
The little yellow dot should run around in there as you move the mouse over the flat in the main viewer indicating its location in the inplane anatomy. This can be very helpful in determining where points came from in the flat. If you have a full 3D canonical available (not in this tutorial), you could switch to that anatomy in the dialog box shown above and it will show you in that anatomy.
The following feature has been somewhat superseded by the continuous mode (you can skip it by closing the dialog box with the red x button at the top left), but if you choose "inplane" for the baseName, set the color to purple (so that it will be easier to see) and click OK, the main viewer will switch to the inplane anatomy and it should look like this:
{{:mrtools:surftutorial_searchForVoxel2.png?600x600|}}
Notice the four purple little points that are drawn over one of the voxels on the left side. This is where the voxel you clicked on in the surface came from.
===== 7. Inflated surface =====
Often it is convenient to make an inflated surface to display your data on. This is easy to do by using the inflated surface that FreeSurfer creates. Go to File/Base Anatomy/Import Surface, find leftInflated.off from the Segmentation directory. Now you just have to set the outer and inner coords appropriately (obviously you don't want to use the coordinates of the inflated surface since those are in the wrong place and should be used for visualization only). Your dialog should look like this:
{{:mrtools:inflateddialog.png|}}
When you click ok, you should get a surface in the viewer that looks like this:
{{:mrtools:surftutorial_inflated.png?569x411|}}
-jg.
====== pRF Tutorial ======
This tutorial shows you how to run a pRF analysis (see [[http://www.sciencedirect.com/science/article/pii/S1053811907008269|Dumoulin and Wandell (2008)]] for details - also [[http://www.nature.com/nature/journal/v452/n7185/full/nature06713.html|Kay, Naselaris, Prenger and Gallant (2008)]]) with mrTools. The basic idea of a pRF analysis is to fit a "population receptive field" to each voxel. This is usually based on stimuli similar to a typical retinotopy experiment (rotating wedges, rings and moving bars). The responses to these stimuli are predicted by a very simple gaussian receptive field model. The gaussian RF has parameters of x,y position and standard deviation (receptive field size). For any given gaussian RF, one can generate a response time series by multiplying the gaussian by the stimulus time-point by time-point and taking the sum across space (i.e. the response of the receptive field to the stimulus). Convolving this with a standard hemodynamic response function (or in some cases a parameterized one with fit parameters that one can adjust) produces a model time series. One then finds the RF parameters that produce a model time series most closely matching the actually measured time series for a voxel. This is taken as the best population RF for that voxel. By doing this you get the usual information from a retinotopy experiment (the eccentricity and polar angle of each voxel's RF), but also, as a bonus, the receptive field size. Plus you are not restricted to stimuli that produce a response that is sinusoidally modulated in time (like wedges and rings). You can in principle use any stimulus that causes response modulation (like bars or white noise).
You should have already run the [[:mrTools:tutorialsRetinotopy|retinotopy tutorial]] before going through this tutorial.
===== Overview =====
- Download the tutorial files
- The stimuli used in the experiment
- Do the classic retinotopy as a reference
- Average and concatenate appropriate scans
- Run pRF analysis on V1 bars
- Examining output of pRF analysis
- Constrained search
- Prefit
- Fitting hemodynamic response function
- Running a full analysis on V1
===== 1. Download files =====
You can download the tutorial files from:
[[http://gru.brain.riken.jp/pub/pRFTutorial.tar.gz|pRFTutorial.tar.gz]]
Note that this is a rather big file, approximately 300MB.
The files are provided as a tar/zip file. In Mac OS X you should be able to just click to open the files in the Finder. Otherwise, you can go to a terminal and do:
gunzip pRFTutorial.tar.gz
tar xvf pRFTutorial.tar
We provide you with a set of 10 functional scans, an inplane anatomy file, a flat map of the left occipital cortex and an inflated left occipital surface.
The functional scans include 4 rotating wedge runs (2 CW and 2 CCW rotations) and 2 expanding and contracting ring runs (1 expanding and 1 contracting). Each run is 168 volumes long (a volume was acquired every 1.54 seconds). The first 8 volumes will be thrown out to allow hemodynamic response and longitudinal magnetization to reach steady state for the correlation analysis, but for the pRF analysis all volumes may be kept. The data has already been motion compensated (since this step takes quite a bit of time to run).
From here, all the directions will be for commands to run under Matlab. It is assumed that you have properly set up mrTools using the directions under [[:mrTools:GettingStarted|Getting Started]].
Note that if you want to run this on your own subjects, you will need to run the stimulus program [[:mgl:overview|mglRetinotopy]].
===== 2. Stimuli and analysis =====
The visual stimuli are the familiar ones from [[mrtools:tutorialsretinotopy|retinotopy]]: CW and CCW rotating 90 degree wedges and expanding and contracting rings. But typically we also run bar stimuli, in which a bar sweeps across the visual field in 8 different directions (it's important that the bars sweep both forwards and backwards so that receptive field size and hemodynamic response width are not confounded; also, we typically run the bars in a pseudo-random order but keep the same order fixed across repeated scans and average all of the data together). Another important point is that it is useful to turn off the stimulus (go to gray) for a cycle every now and then. This probes for very large receptive fields which may not be modulated by stimuli that move within the receptive field. The bar stimuli look something like this (note that the subject typically is fixating in the center on a fixation cross - not depicted - while doing a straightforward fixation task):
{{:mrtools:bars.gif|}}
The pRF analysis is basically very simple. We hypothesize that each voxel has a gaussian shaped receptive field (the combined receptive field of the population of neurons which contribute to the response of the voxel). We then want to fit the parameters of this gaussian (x, y and standard deviation) such that the predicted response of the receptive field to the stimuli we presented best matches the actual measured time series. We make predicted responses, like the following:
{{:mrtools:prfbars1x400f.gif|}}
What you see in the above is a small receptive field in the top right with the stimulus running over it in different directions. The black trace below is the time-point by time-point multiplication of the stimulus and the receptive field summed over all of space (it is the predicted response of the receptive field). That gets convolved with a hemodynamic response function to make the predicted BOLD response in red.
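To make this concrete, here is a minimal Matlab sketch of how such a prediction could be computed. The variable names (stim for an x-by-y-by-time binary stimulus volume, hdr for a hemodynamic impulse response sampled at the volume rate) are assumptions for illustration only - this is not the actual mrTools pRF code:
% sketch of computing a pRF model prediction (illustrative only)
% assumes: stim is an [nX nY nVolumes] binary stimulus volume and hdr is a
% hemodynamic impulse response sampled once per volume
rfX = 4; rfY = -3; rfStd = 1.5;   % candidate receptive field parameters (deg)
[xMesh yMesh] = meshgrid(linspace(-10,10,size(stim,2)),linspace(-10,10,size(stim,1)));
rf = exp(-((xMesh-rfX).^2 + (yMesh-rfY).^2)/(2*rfStd^2));   % 2D gaussian RF
nVols = size(stim,3);
neuralResponse = zeros(1,nVols);
for iVol = 1:nVols
  % multiply the stimulus frame by the RF and sum over space
  neuralResponse(iVol) = sum(sum(stim(:,:,iVol).*rf));
end
% convolve with the hemodynamic response and trim to the scan length
modelTSeries = conv(neuralResponse,hdr);
modelTSeries = modelTSeries(1:nVols);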
If you move the receptive field to a different location, it predicts a different time series:
{{:mrtools:prfbars2x.gif|}}
So, to fit a voxel's response we change the position and standard deviation of the receptive field until the predicted response (red) best matches (in a least-squares sense) the actual time series for the voxel (note the lag in the BOLD response compared to the black trace).
Here is an example of a fit as it proceeds to change the receptive field position and size to best match the predicted time series (red) to the measured time series for the voxel (black):
{{:mrtools:prffit.gif|}}
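The fit itself is just a nonlinear search over the three RF parameters to minimize the squared difference between the model and measured time series. Here is a hedged sketch of that idea (pRFModelTSeries is a hypothetical helper wrapping the computation sketched above and tSeries is a measured voxel time series; amplitude scaling and the details of the actual mrTools fitting code are omitted):
% sketch of fitting pRF parameters with an unconstrained nelder-mead search
% assumes pRFModelTSeries(params,stim,hdr) returns a model time series as in
% the sketch above and tSeries is the measured voxel time series
residualSS = @(params) sum((tSeries - pRFModelTSeries(params,stim,hdr)).^2);
initParams = [0 0 2];                             % starting x, y and rfStd
bestParams = fminsearch(residualSS,initParams);   % nelder-mead simplex search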
===== 3. Retinotopy analysis =====
We start by doing a quick retinotopy analysis. This will be used as a reference. We will be using the pRF and GLM_v2 plugins (GLM_V2 provides some better [[:mrtools:tutorialsglm_v2#changes_to_the_gui|user interface items]]). Make sure these are installed by doing:
>> mlrPlugin
and selecting GLM_V2 and pRF from the dialog. Note that if you already have an mrLoadRet session running, you will need to restart it for these settings to take effect.
{{:mrtools:mlrpluginforprf.png|}}
Now, go ahead and start mrTools by cd'ing into the experiment directory and running:
>> mrLoadRet
The scans are all labelled and you should be able to do this analysis without looking at the cheat sheet below. Note that the best shift used for these scans was -2. So, go ahead and average together the appropriate scans with the correct shift and flip and then run the correlation analysis!
++Cheat sheet|
\\
\\
Really can't figure it out? Hmm. Consider doing the [[:mrtools:tutorialsRetinotopy|retinotopy tutorial]] again. In brief, here are the steps you need to accomplish. \\
\\
**(1) Check what scan is what**. I like to do this from the command line by typing groupInfo('Raw') - you can also go to Edit/Group/Info. Note that typically we would need to run motionComp on these scans, but for the tutorial's sake this has already been done. GroupInfo will show you that Scans 1 and 5 were CCW, Scans 2 and 6 were CW, and Scans 3 and 4 were expanding and contracting rings, respectively.\\
\\
**(2) Average together wedge scans**. You need to do this with the correct shift and reversing. For the wedge scans, go to the Analysis/Averages menu item. Now unclick "include in average" and press the "copy" button - this removes all scans from the average. Now click "include in average", set the shift to -2 and click reverse. Go to the next scan (2). Click "include in average", shift = -2, and make sure that reverse is **unclicked**. Go to scan 5: "include in average", shift = -2, reverse=clicked. Finally scan 6: "include in average", shift = -2 and reverse=unclicked. Check your work in the dialog box and click OK. (Make sure that only scans 1, 2, 5 and 6 are included in the average.) \\
\\
**(3) Average together expanding/contracting rings**. Same deal as the wedges: average scans 3 and 4, time-reversing one of them and shifting both by -2. Make sure that only scans 3 and 4 are included and that all other scans are **not** included in the average. \\
\\
**(4) Check your work**. Switch to the Averages group (drop down menu on the top-left) and get scanInfo (⌘I) for the first scan. The "scanList" field should read [1 2 5 6], shiftList should be [-2 -2 -2 -2] and reverseList = [1 0 1 0]. Also, make sure that the scan has 160 frames total with 0 junk frames (totalJunkedFrames should show up as 8, indicating that 8 frames were junked in the averaging step). If any of these are not correct, delete the scan and try again. Check the same for the second scan. If everything is good, you should go to Edit/Group/Edit Group and prepend the word "wedges" to the description of the first scan and "rings" to that of the other scan. This will help keep track of everything.\\
\\
** (5) Run correlation analysis** Go to Analysis and run the correlation analysis. Make sure "(Re-)compute" is checked for both scans and #cycles reads 10.\\
\\
** (6) Examine retinotopy** Switch the base to s017LeftOccLarget (the flat patch for the occipital cortex on the left) by going to the Base dropdown on the top left and selecting it (if for some reason it is not loaded, you can also go to File/Base Anatomy/Load and load s017LeftOccLarget). Click on co in the Clipping overlays block and set its minimum to 0.3 (to get rid of some of the voxels not actually driven by the stimulus).\\
\\
++
If all went well, your retinotopy should look something like this:
{{:mrtools:prfretinotopy.png?500x457|}}
Now you should draw ROIs for V1, V2, V3 and V3a (hV4 is a bit broken up in this particular scan). Again you should be able to do this based on what you learned in the Retinotopy tutorial. For our version check the following:
++Cheat sheet|
If you need help on drawing ROIs, check the Retinotopy tutorial [[:mrtools:tutorialsRetinotopy#defining_visual_fields|here]].
{{:mrtools:prfretinotopywithroi1.png?500x457|}}\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
{{:mrtools:prfretinotopywithroi2.png?500x457|}}\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
++
===== 4. Average and concatenate =====
In the next steps you will average and concatenate scans for the pRF analysis. Note that this is **different** from what you do with the retinotopy analysis, in which you shift and flip and do other tricks to account for the hemodynamic lag. For the pRF analysis you are going to fit a **model** of the response which has the hemodynamic response lag as part of it. So you don't want to adjust the timing of the scans at all. This is important. If your scan timing is not correct relative to the stimulus, all bets are off. In particular, you do **not** want to shift or reverse scans in averages. The code does handle junkFrames appropriately, but we are going to turn those off too. Remember that if you remove frames, your model has to know you did this; otherwise it will associate the wrong stimulus with the response. So, let's make averages of like scans without any adjustments to the scans.
- Turn off junk frames in the Raw group: You do this by going to the Raw group (select it from the group dropdown on the top left), selecting Edit/Group/Edit Group, and then going through and setting Junk frames to 0 for each scan. This should automatically set the numFrames to 168. Click OK.
- Average together the CCW scans (note that you do **not** want to include CW and CCW scans together since they had very different stimuli with different responses). Go to averageTSeries, include scan 1 and scan 5. Make sure that shift is 0 and reverse is **not** clicked. Make the same type of averages for CW (scans 2 and 6), Expanding (scan 3), Contracting (scan 4) and Bars (scans 7-10). You should end up with 5 new scans in your Averages group. Note that for Expanding and Contracting you will only have one scan each - that's OK. This is just so that you have all the scans averaged as needed in the Averages group.
- Check your work. Go to the Averages group. You should have a total of 7 scans now. Check Edit/Scan/Info (⌘I) on each of the scans that you just made to make sure that they are the average of the correct scans and that they do not have shift or reverse set. They should also have no totalJunkedFrames or junkFrames and have a total of 168 volumes.
===== 5. Run pRF analysis on V1 bars =====
First make a very small ROI for testing. You can make the ROI anywhere, but inside V1 where the retinotopy produced good results would be a good place to start; for the sake of following along with the tutorial you may want to choose a similar set of voxels to those shown below:
{{:mrtools:prfroi.png|}}
Now, from the Analysis menu select pRF. You should see a dialog box like this:
{{:mrtools:screen_shot_2013-07-18_at_12.33.26.png|}}
This sets the options for the pRF analysis. The help menu should give explanations for the options, so we will just cover a few things here that are essential to know. The pRF analysis needs to know the stimulus that was presented during the scan. The code retrieves this from the [[:mgl:taskreferencesavingdata|stim file]]. You can take a look at what it computes as the stimulus by clicking the Display stimulus button. For example, let's look at the stimulus for the 7th scan in Averages (the average of the bar stimulus scans). Set dispStimScan to 7 and click Display stimulus and you should see this (this may not work on a non-mac computer):
{{:mrtools:prfbarsstimulus.png?557x370|}}
What you are looking at is the stimulus shown as different cuts through a volume with axes x and y (the image dimensions) and t (time, in the Z dimension). So, if you click the Z button you should see what the stimulus looks like animated volume by volume. Note that for our calculations we consider the stimulus to be changing at the same rate as the data acquisition. That is, the stimulus changes position once each volume. We are ignoring all of the sliding bars that you see in the [[:mrtools:tutorialsprf#stimuli|stimulus shown above]]. A more sophisticated analysis might try to take into account every frame of the stimulus and use all of that to generate a pRF model, but here we just think of everything happening within the bar as contrast changes that stimulate a pRF; therefore the stimulus image simply shows areas in white where there was high image contrast and black where there was no image contrast. Click Close to close the stimulus view.
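If it helps to have a concrete picture of this x-y-t representation, here is a toy sketch that builds a binary stimulus volume for a single left-to-right bar sweep (purely illustrative - the real stimulus is reconstructed from the stim files, and the sizes here are made up):
% toy example of an [x y t] binary stimulus volume for one bar sweep
nX = 40; nY = 40; nVols = 20;     % image size in pixels and number of volumes
barWidth = 5;                     % bar width in pixels
stim = zeros(nX,nY,nVols);
for iVol = 1:nVols
  % the bar sweeps from left to right, one step per volume
  barStart = round((iVol-1)*(nX-barWidth)/(nVols-1))+1;
  stim(:,barStart:barStart+barWidth-1,iVol) = 1;
end
% step through it volume by volume
for iVol = 1:nVols
  imagesc(stim(:,:,iVol));colormap(gray);axis square;drawnow;pause(0.1);
end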
Next, let's take a look at the model hemodynamic response function. Click Display HDR:
{{:mrtools:prfhdr.png|}}
You can change the shape of this function via its parameters (timelag, tau and exponent). It is simply a gamma function, commonly used as a simple hemodynamic response function. If you want a hemodynamic response function that has a negative undershoot, click diffOfGamma and display again. Usually you don't need to adjust this function, but if you have different assumptions about what a "canonical" hemodynamic response function is, you can change the parameters to get what you want. Note that the basic fit convolves this canonical hemodynamic function shape with the model response time series; you can also fit a hemodynamic function at the same time as fitting the receptive field. This is set by changing the rfType to "gaussian-hdr". But, for now, let's stick with the basics. We'll get back to more advanced fitting in a little bit.
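For reference, a single-gamma hemodynamic response with timelag, tau and exponent parameters can be sketched as below. This is a common parameterization assumed for illustration, not necessarily the exact equation mrTools uses (diffOfGamma would subtract a second, delayed and scaled gamma to produce the undershoot):
% sketch of a gamma-shaped hemodynamic response function (illustrative only)
timelag = 1; tau = 0.6; exponent = 6;   % parameters like those in the dialog
tr = 1.54;                              % sample at the volume rate
t = 0:tr:25;
tShift = max(t-timelag,0);
hdr = (tShift/tau).^(exponent-1).*exp(-tShift/tau)/(tau*factorial(exponent-1));
hdr = hdr/sum(hdr);                     % normalize to unit area
plot(t,hdr);xlabel('Time (sec)');ylabel('Response');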
So, now let's run it. Make sure that "restrict" has the ROI you just created (otherwise it may take a very long time to compute - if you make a mistake and it is running too long, your only option is to hit ctrl-C in the command window, which will crash you out to the debugger, but won't have any other bad side effects). Click OK. Then select Scan 7 (the bars stimuli scan) from the dialog that pops up. At this point you may get a pop-up with the following:
{{:mrtools:prfmlrnumworkers.png|}}
Basically, the algorithm works much faster if you use the parallel toolbox. By clicking Yes (this option may not be available if you do not have access to the parallel toolbox), it will start a number of threads, each on a different processor of your machine. This speeds things up since each processor can be fitting a different voxel simultaneously. So, generally it is worth saying Yes to this dialog. Note that there appears to be some issue as of this writing (6/18/2013, Mac OS 10.8) with java and the parallel toolbox (the program will crash in matlabpool) that can be fixed by a [[http://www.mathworks.com/support/bugreports/919688|patch]].
It should now be fitting the time series. This can take some time depending on how big the ROI is that you made. You can check the progress in the matlab window.
===== 6. Examining output of pRF analysis =====
When processing is finished, there should be 4 new overlays loaded (r2, polarAngle, eccentricity and rfHalfWidth). Switch to scan 7 and you should see an r2 map that looks something like this (note that if you click on a voxel within an ROI you may get a different plot - summary of the whole ROI - so either Hide all the ROIs (menu at left) or click on a voxel outside the ROI):
{{:mrtools:prfr2map.png|}}
This shows the amount of variance accounted for by the model fit. You can examine the fit by interrogating voxels with the pRFPlot interrogator. To do so, make sure the interrogators are running (Plots/Interrogate Overlay) and make sure that the interrogator pRFPlot is selected in the bottom left of the screen. You may also want to hide your ROIs, since clicking on a voxel within an ROI will bring up a different display which shows the fits of all the voxels, as [[#Running_a_full_analysis_on_V1|explained below]]. Now click on a voxel. For example, if you click on the voxel [12 30 22] you should see the following:
{{:mrtools:prfinterrogator.png|}}
What you are looking at is the data in black with the model fit in red. The top right shows the model hemodynamic response function used (which should be the same for all voxels since we did not fit its parameters for this fit). Below that, on the blue background, is a display of the fitted RF. The RF appears on the right side of the visual field, which is a good sanity check since this is left visual cortex. If you drag your mouse over the time series the gray bar should move. The gray bar simply depicts the time window being displayed in the black/red images below. Those images show the stimulus at that time (gray bar) with respect to the receptive field for that voxel (red).
Now let's go back and look at the various other overlays. If you click on the polarAngle overlay you should see the following:
{{:mrtools:prfpolar.png|}}
Note that you see a nice gradation as receptive fields go from the lower right visual quadrant in green to the upper right visual quadrant in blue - just as expected for the retinotopic organization of V1. You should click on individual voxels in green and in blue and see in the interrogator window that the receptive fields move as expected. For example, here is one from the green region which is more dorsal in V1 and thus responds to the lower visual field:
{{:mrtools:prfblue.png|}}
And here is one from the blue area which is more ventral and thus responds to the upper visual field:
{{:mrtools:prfgreen.png|}}
Next click on the eccentricity overlay and you should see the following:
{{:mrtools:prfeccentricity.png|}}
Again a nice gradation as expected from retinotopy. Lighter colors are more eccentric. If you click on voxels at various positions you should see the eccentricity of the RFs.
Finally, we can look at the rfHalfWidth. This is the size of the RF and is actually the std of the gaussian used to fit the receptive field. The map should look something like this:
{{:mrtools:prfhalfwidth.png|}}
It's a bit noisier in the sense that there isn't as clear a gradation as for eccentricity, but if you squint there is a tendency for RFs that are bigger (lighter colored) to be more eccentric, as expected from the known physiology. Note that there are some voxels that are black, suggesting very small RFs. These are likely not real but reflect a failure mode of the algorithm. Clicking on one of them (e.g. [10 29 23]) you can see what happened:
{{:mrtools:prfbadvoxel.png|}}
What happened is that the algorithm found that it could make the RF really, really small and then account for a few time points in the time series (the red spikes you see). Basically, for a tiny RF the responses are very infrequent, so the algorithm can fit noisy single points in the data. Note that the variance accounted for by this fit is not as good as for other voxels, so if you set the Clipping Overlay r2 to have a minimum of 0.4 (make sure to have "Clip across overlays" checked, then click on the r2 in the Clipping overlays and set the min field), you will see that all these black voxels are thresholded out:
{{:mrtools:prfhalfwidth2.png|}}
===== 7. Constrained search =====
Next we will try to refit the population RF using different options for the fitting parameters to better understand what is going on.
We will explore how to set constraints on the search. For example, the voxels with a very small rfHalfWidth are unlikely to be real, so it would be nice to set a constraint on the algorithm to keep it from finding such small RFs.
To do this you can recompute the fits for specific voxels - in particular, choose a voxel that came up black in the pRF analysis rfHalfWidth map. What you need to do is make sure that input focus is on the mrLoadRet window (click on the title bar for instance). Now using the interrogator function, you should click on a voxel //**while holding the shift key down**//. (Note that this may not work on a non-Apple machine - if clicking on an individual voxel while shift is held down does not work, then just re-run the whole pRF analysis from the Analysis menu).
{{:mrtools:prfblackvoxel.png|}}
Make sure you keep the shift key down until you see the pRF options dialog box come up:
{{:mrtools:prfoptions.png|}}
We can now set the constraints so that we don't allow fits with tiny receptive fields. To do this, we have to use the levenberg-marquardt algorithm instead of nelder-mead. This is because levenberg-marquardt is a non-linear minimization algorithm which can take constraints, whereas the nelder-mead simplex method is an unconstrained search. Note that the default is to use nelder-mead because it doesn't seem to get stuck in local minima as much as levenberg-marquardt. If you want to learn more about these algorithms, books like "Numerical Recipes" are a standard reference. So, first we set the algorithm to levenberg-marquardt. Next unclick "defaultConstraints". Also click "quickPrefit" (more about that in a minute) and click "OK". You should see the following constraints dialog come up.
{{:mrtools:prfconstraints.png|}}
You can set constraints on any of the parameters, but for this case, let's just set the minrfWidth to 0.5 to keep the fit from finding a very small RF. Set that and click OK. After a minute you should see a window appear in which you can see the progress of the fitting algorithm (it should update from time to time with the RF moving around and the red model responses changing).
{{:mrtools:prfrefit.png?490x345|}}
In this case, it settled on an RF slightly larger than the minimum (0.76) and still seems to have found the right location for the voxel (compare the x,y position with neighboring voxels in the map).
You may want to refit a few voxels and see how changing the constraints in various ways changes the fit that you get.
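Conceptually, the constrained fit is just a bounded nonlinear least-squares problem. In stock Matlab you could sketch it with lsqnonlin from the Optimization Toolbox, which accepts lower and upper bounds (mrTools has its own wrappers around the search algorithms, so this is only an illustration; pRFModelTSeries and tSeries are the hypothetical helper and data from the earlier sketches):
% sketch of a bounded fit analogous to setting minrfWidth in the dialog
% assumes pRFModelTSeries(params,stim,hdr) and tSeries as in the earlier sketch
residual = @(params) tSeries - pRFModelTSeries(params,stim,hdr);
initParams = [0 0 2];                % starting x, y and rfStd
lowerBounds = [-15 -15 0.5];         % e.g. do not allow rfStd below 0.5 deg
upperBounds = [15 15 15];
bestParams = lsqnonlin(residual,initParams,lowerBounds,upperBounds);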
===== 8. Prefit =====
When doing nonlinear fits, the starting point of the algorithm often has a large impact on the fit achieved. For example, if the initial estimate of the RF that you give the algorithm is very far from ideal, the algorithm can get lost in the error space and stuck in a local minimum with parameters far from the global optimum. If you could tell the algorithm where to start - for instance, in the contralateral visual field - it could do better.
The way we deal with this problem is by computing a grid of model receptive fields at many different locations in the visual field with many different receptive field sizes. We then do a quick test - what we call the "prefit" - to see which of these model responses is most highly correlated with the actual voxel's responses. By choosing that RF's parameters as the starting point we can get a better fit.
But how many model receptive fields should we calculate? The more you calculate, the longer the algorithm takes, but the better the starting estimate. So, there are a few options that you can play with here. Shift-click on a voxel to bring up the pRF parameters dialog box. quickPrefit only computes a small number of model responses. This is useful for testing things out quickly, but the problem is that you may not have sampled the space of RFs very well and so the algorithm will later get stuck in a local minimum. We note that this is particularly true with levenberg-marquardt, which tends not to move very far from the initial starting point. This is why we default to using nelder-mead even though it cannot do constrained searches.
The other option to note is the "prefitOnly" option. What this does is compute the best RF just based on the prefit (no nonlinear search). It just matches the best model response with the voxel response and reports that as the fit. It is very quick and dirty and can give a pretty good idea of whether things are working or not, but the results will be quantized to only the RF locations and sizes that were computed.
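To make the prefit idea concrete, here is a hedged sketch: compute model responses on a coarse grid of candidate RF parameters, correlate each with the voxel time series, and use the best one as the starting point for the nonlinear search (again assuming the hypothetical pRFModelTSeries helper and tSeries from the earlier sketches - this is not the mrTools prefit code itself):
% sketch of a prefit: pick the candidate RF whose model response correlates
% best with the voxel time series, then use it as the starting point
[xGrid yGrid stdGrid] = ndgrid(-10:2:10,-10:2:10,[0.5 1 2 4 8]);
candidates = [xGrid(:) yGrid(:) stdGrid(:)];
r = zeros(size(candidates,1),1);
for iCand = 1:size(candidates,1)
  modelTSeries = pRFModelTSeries(candidates(iCand,:),stim,hdr);
  c = corrcoef(modelTSeries(:),tSeries(:));
  r(iCand) = c(1,2);
end
[maxR bestCand] = max(r);
initParams = candidates(bestCand,:);   % starting point for the nonlinear search
% prefitOnly would simply report initParams as the fit, with no further search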
===== 9. Fitting hemodynamic response function =====
Up to now we have been using a "canonical" hemodynamic response function. In some cases you may want to relax this assumption and actually try to fit the hemodynamic response. Note that when you do this there are more parameters in the model and many more opportunities to get stuck in local minima, so the fit gets harder to do. Nonetheless, we can try to estimate the hemodynamic response function at the same time. Shift-click on a voxel with very high r2 like [10 31 20]. Now, in the options dialog box, change the rfModel to "gaussian-hdr". Keep everything else at the default values - in particular, it is probably best not to do a quickPrefit since we need good starting parameters. You should see a window like this:
{{:mrtools:prfgausshdrfit.png?885x416|}}
You can watch as the algorithm adjusts both the RF location and the shape of the hemodynamic response function to find the best fit. Here is an example of it doing the fit with the hemodynamic response:
{{:mrtools:prffithdr.gif|}}
===== 10. Running a full analysis on V1 =====
Ok. Now we are ready to try a bigger ROI for a more realistic test. To do this, we are now going to concatenate all of our various stimulus types together so that we can use all of the data that we have.
So, go to Analysis/Concatenate Time Series and concatenate scans 3-7 (make sure **not** to concatenate scans 1 and 2, which were used to make the standard retinotopy and have their time series shifted and reversed). You can use the standard options for filtering and the like in the concatenate dialog box. The scans you select should look like this:
{{:mrtools:prfchoosescansforconcat.png|}}
Now, switch to the Concatenation group and select Analysis/pRF. Let's do a quick and dirty prefit only to see what things look like. Click "prefitOnly" and "quickPrefit" and make sure that the restrict ROI selected is for your V1 ROI.
{{:mrtools:prfoptionsquick.png|}}
This may take some time to run. It works best if you are using the parallel toolbox and have lots of free processors! After some time it should complete and you should get a rough polarAngle map that looks something like this:
{{:mrtools:prfquickpolarv1.png?500x500|}}
And the eccentricity map should look like this:
{{:mrtools:prfeccentricityquickv1.png?500x500|}}
Looks reasonable! If you have the processing power and the time to wait, you can try a real fit. Just run pRF from the analysis menu again, but this time use all the default options and click OK. Wait some time and you should get maps that look like the following. Things look generally smoother and cleaner with the exception of a few voxels here and there that are not fit well:
r2 map:
{{:mrtools:prfr2full.png?162x137|}}
Polar angle:
{{:mrtools:prfpolarfull.png?162x137|}}
Eccentricity:
{{:mrtools:prfeccentricityfull.png?162x137|}}
and rfHalfWidth:
{{:mrtools:prfhalfwidthfull.png?162x137|}}
If you click with the interrogator on a voxel that has been well fit (say [10 31 21]), you should see that it is using the full time course across many experiments and doing a pretty good job of fitting all the data (compare the red model curve to the data in black). Notice that the overall r2 for this voxel shows the model accounting for 80% of the variance. In the individual scans within the concatenation (second row of the title), it accounts for 85% to 93% of the variance.
{{:mrtools:prffitfull.png?750x370|}}
Finally, if you have the V1 ROI showing and click a voxel within the ROI:
{{:mrtools:prfwithinclick.png?485x465|}}
You can examine all the RFs within the ROI together. You should see a figure like the following:
{{:mrtools:prffullv1.png?475x261}}
The left panel shows all the RFs drawn onto the visual field - they are all in the contralateral (right) visual field and do a reasonably good job of tiling that space. The plot of RF half width (std of the gaussian receptive field) against eccentricity shows a nice linear relationship, which is also what we expect in V1.
-jg.
====== Classification tutorial overview ======
This tutorial explains classification analysis with fMRI data using mrTools. Classification analysis can be used to examine distributed patterns of activity - often subtle responses that are hard to examine with other methods. For example, the responses to different orientations, which may be distributed across the cortex (see [[http://www.jneurosci.org/content/31/13/4792.long|Freeman et al]] and [[http://www.cell.com/neuron/abstract/S0896-6273(06)00586-1|Sasaki et al]] for how they are distributed). [[http://www.nature.com/neuro/journal/v8/n5/full/nn1444.html|Kamitani and Tong]] showed that you could use classification techniques to read out the orientation representation. That is, they built classifiers on a subset of data collected as subjects viewed gratings of different orientations and then tested those classifiers on a left-out set of data. They were able to show that they could correctly determine what stimulus orientation subjects had been looking at, just by looking at their fMRI-measured brain activity. Here we will go through what you need to know to do these types of analyses. The tutorial has one section on the basics of classification (not necessarily about fMRI) and another section which uses actual fMRI data to do a very simple classification analysis (was a stimulus presented above or below the fixation point?).
You can download the tutorial files (which contain the data for the second part) from:
[[http://gru.brain.riken.jp/pub/classificationTutorial.tar.gz|classificationTutorial.tar.gz]]
Note that this is a rather big file, approximately 430MB, so it may take some time to download.
The files are provided as a tar/zip file. In Mac OS X you should be able to just click to open the files in the Finder. Otherwise, you can go to a terminal and do:
gunzip classificationTutorial.tar.gz
tar xvf classificationTutorial.tar
You should familiarize yourself with the basics of data analysis in mrTools by doing the [[:mrTools:tutorialsRetinotopy|retinotopy tutorial]] and the [[:mrTools:tutorialsEventRelated|event related tutorial]] before going through this tutorial.
Finally, you will need to download and put the following matlab functions into your path:
svn checkout http://gru.brain.riken.jp/svn/matlab/classification classification
Some documentation for these functions can be found [[:mrtools:classificationtools|here]].
====== Classification Basics ======
===== Mean classifier =====
In this first part, we are just going to play around with different classifiers to see how they perform on data to get a feel for the strengths and weaknesses of different methods.
We begin by creating a set of test data. What we want is two sets of data from two different classes. For classification in the context of fMRI data you can think of these as responses generated from two different kinds of stimuli or different kinds of tasks etc. Each dimension is the response (usually the average over some period of time) of a different voxel. For the test data we will create and use data with only two dimensions so that it is easy to visualize and understand what is going on - meaning that it is like using fMRI data from only two voxels. Conceptually, everything should generalize to multiple voxels. But, beware of the [[http://en.wikipedia.org/wiki/Curse_of_dimensionality|"curse of dimensionality"]] in which adding dimensions increases the volume of the space that one has to search for solutions in - this can occasionally cause problems.
Ok. So let's create the simplest case for a classification. Two data sets that are separated from each other and should therefore be easily classifiable. We will define the points ourselves by using the program classifyTest:
[x1 x2] = classifyTest('type=none');
On the graph, click 20 or 30 points roughly in a circle in one location and then double-click the last point. Then make the second set of 20 or 30 points in a location slightly removed. It should look something like this:
{{:mrtools:classifygetpoints.png?337x303|}}
Ok. Now in x1 and x2 we should have the coordinates of all the points that we made. You can take a look at them:
>> x1
x1 =
-0.673963133640553 0.87280701754386
-0.763824884792627 0.68859649122807
...
>> x2
x2 =
0.383640552995392 0.013157894736842
0.197004608294931 -0.171052631578947
...
Only the first few points are shown. If you want to use nearly the same data as shown in this tutorial, copy and paste the lines below into your matlab buffer:
++Data to copy and paste|
\\
\\
x1 = [-0.67 0.87;-0.76 0.69;-0.81 0.36;-0.6 0.3;-0.55 0.46;-0.54 0.61;-0.31 0.44;-0.29 0.7;-0.46 0.85;-0.64 1.04;-0.79 0.94;-0.88 0.56;-0.71 0.47;-0.6 0.77;-0.36 0.91;-0.51 1.1;-0.44 0.63;-0.44 0.36;-0.18 0.56;-0.21 0.78;-0.28 1.07;-0.51 0.98;-0.85 0.75;-0.81 1.12;-0.94 0.46;-1.03 0.71;-0.93 0.9;-0.61 0.62;-0.58 1.15;-0.39 1.19;-0.13 0.93;-0.52 0.71];\\
\\
x2 = [0.38 0.01;0.2 -0.17;0.16 -0.47;0.27 -0.76;0.41 -0.72;0.52 -0.7;0.69 -0.43;0.56 -0.09;0.29 -0.18;0.34 -0.44;0.51 -0.41;0.66 -0.28;0.81 -0.36;0.76 -0.64;0.74 -0.71;0.61 -0.84;0.58 -0.5;0.45 -0.16;0.41 -0.46;0.41 -0.6;0.28 -0.61;0.19 -0.7;0.16 -0.27;0.31 -0.34;0.45 -0.28;0.3 -0.02;0.62 0.09;0.68 -0.09;0.8 -0.21;0.89 -0.61;0.98 -0.38;0.57 -0.31];
\\
++
Let's define some terminology here. We have two classes (the red and blue points, now named x1 and x2, respectively). Again, classes can be responses to different kinds of stimuli (horizontal vs vertical orientation) or different kinds of tasks (adding vs subtracting numbers), anything you like. Each row is an "instance" of a response to that class. It has two columns, which would be the responses of voxel 1 and voxel 2 for that instance.
Linear classifiers are defined by the "projection line". This is the line (and it's a line even if you have more than two dimensions for your data) upon which you project all new instances. Projection means to drop a line from each instance (which is a point in the space) perpendicular to the projection line. The point where this intersects the projection line is called the projection of the point and is also the point on the projection line that is closest to the instance. It is easy to find the projection - you just take the [[http://en.wikipedia.org/wiki/Dot_product|dot product]] of the vector that defines the projection line and the instance.
Once you find the projection of each instance onto the projection line, you compare it against a bias point: if it is less than the bias point, you say the instance comes from the first class; if it is larger than the bias point, it comes from the second class. That's all there is to a linear classifier. Just to repeat, you do the following:
- Get the projection of each new instance on to the projection vector by taking the dot product of the instance and the vector that defines the projection line
- Compare that to the bias point (note that the projection is always a scalar value)
But how do we define the projection line and the bias point? Let's start with the easiest of all classifiers: we will define the projection line as the line that passes through the means of the two class distributions. Let's see what that does. First, let's get the mean of both distributions:
m1 = mean(x1)
m2 = mean(x2)
We now pass it into classifyTest to have classifyTest classify based on this projection line. Do the following:
classifyTest(x1,x2,'projectionLine',[m1;m2]);
You should see a graph that looks like the following:
{{:mrtools:classifysimple.png?337x303|}}
Note that there are two lines drawn. The white dashed line is your projection line. Try to visualize what each point would look like if you drew a perpendicular from each point to the white dashed line and placed each point (instance) onto that projection line. The solid white line is the decision boundary; it is defined to be perpendicular to the projection line and is centered on the projection line at the bias point (which has been chosen for us as the point midway between the two distribution centers that we passed in). Note that the decision boundary is in general an n-1 dimensional plane. That is, if we had k voxels instead of the 2 voxels of data in this case, the decision boundary would be a k-1 dimensional plane. The projection line would still be a line. You can think of the decision boundary as a wall that separates points into the two classes.
Ok. That was easy, but what happens if there are overlapped points? Let's make some points that are overlapped:
[x1 x2] = classifyTest('type=none');
You should make points that look overlapped like this:
{{:mrtools:classifyoverlapped.png?337x303|}}
Again get the means of the two classes to define a simple classifier that simply projects each point on to the line defined by the means:
m1 = mean(x1);
m2 = mean(x2);
classifyTest(x1,x2,'projectionLine',[m1;m2]);
It should look like this:
{{:mrtools:classifyslightlyharder.png?337x303|}}
Note that some points end up on the wrong side of the decision boundary; these misclassified points are marked with an x. Note also that this is not a good way to test a classifier since we are testing with the same set of data that we built with - this can cause very serious overfitting problems. More on that later, along with how to do cross-validation to avoid these problems.
===== Bias point =====
Remember that we said that the linear classifier is just a projection line, and that we take the dot product of the vector that defines that projection line and each instance and compare it to a bias point? Let's go through that step-by-step. We have been defining the classifier according to the projection line that goes through the two means of the distributions, and that is what we pass into the function classifyTest. However, what do we mean by the vector that defines the line? Well, this is actually just the direction of that line, which is defined as a vector w as follows:
w = m2-m1
Quite simple, huh? You can pass that into classifyTest to make sure it gives the expected results:
classifyTest(x1,x2,'w',w);
Now, let's talk about the bias point. How do we calculate that? The code has been setting it for you (that is, behind the scenes when you define the classifier weights, a calculation is being made for where the bias point is), and now we are going to compute it ourselves. It's worth understanding what the code does to define this bias point. So, let's just enter an arbitrary bias point that won't work, like 0:
classifyTest(x1,x2,'w',w,'biasPoint',0);
You should see that the output has the decision boundary in the wrong place and so more instances get misclassified:
{{:mrtools:classifyzerobias.png?337x303|}}
To find a better boundary, we can find out where each of the instances in each distribution projects onto the classifier line and then find the midpoint between these two distributions. So, to get the projection we simply take the dot product of the weight vector with each of the instances:
for i = 1:size(x1,1)
p1(i) = dot(w,x1(i,:));
end
Note that the dot product simply multiplies each dimension of each instance by the corresponding element of the weight vector and sums over that. That is why we call it a 'weight' vector. The larger the value in w, the more weight is given to the value of that dimension. So, just to be clear, the dot product for the first instance is calculated as:
firstInstance = x1(1,:);
projection = w(1)*firstInstance(1) + w(2)*firstInstance(2);
The value projection should give you the same value as p1(1). Now, on with computing the best bias point. Let's find the projection of the other class onto the projection line:
for i = 1:size(x2,1)
p2(i) = dot(w,x2(i,:));
end
Ok. So let's look at the projection values for each class as distributions. Note that this is now one dimensional since these are scalar values on the projection line. This is the point of the classifier - it turns the k-dimensional (in our case 2D) instances into a one dimensional scalar value which can best distinguish the two classes. Let's look at that:
figure
[n binX] = hist(p1);
bar(binX,n,'FaceColor',[1 0 0]);
hold on
[n binX] = hist(p2);
bar(binX,n,'FaceColor',[0 0 1]);
You should see something like this:
{{:mrtools:classifyoned.png|}}
Notice that the distributions are pretty well separated. That's because the classifier is doing a good job of turning our 2 dimensional instances into one dimensional points (the projection onto the projection line) that best separate the two classes.
Now we just need to find a good boundary in this one dimensional space that separates the two distributions. The midpoint between the means of the two distributions will do (assuming equal variance of the red and blue distributions):
mp1 = mean(p1);
mp2 = mean(p2);
biasPoint = (mp2+mp1)/2;
You can draw that biasPoint on to the distributions to see what it looks like:
vline(biasPoint);
You should see that with this bias point you only misclassify a few of the points. To see it in the original space, we can set it in classifyTest:
classifyTest(x1,x2,'w',w,'biasPoint',biasPoint);
Note that now you know all it takes to build and test a linear classifier. To build, you need to specify a projection vector (i.e. a weight vector) and a bias point. To test, you take the dot product of a new instance (ideally one not in the build set) and the weight vector and compare it against the bias point. So, you can classify any instance you want by hand. For example, take the fourth instance in the first class:
testInstance = x1(4,:);
dot(w,testInstance) < biasPoint
This should return true, saying the point is in the first class (unless it happens to be a point that is misclassified). Take another point from the second class and it should return false (again, unless it is misclassified):
testInstance = x2(7,:);
dot(w,testInstance) < biasPoint
Check all the values in p1 and p2:
p1 < biasPoint
p2 > biasPoint
These should be mostly 1 - correct classification (note again that this is not a real test since we are building and testing with the same data set). But, anyway, if you want to compute the percent correct, you would do:
(sum(p1 < biasPoint) + sum(p2 > biasPoint))/(length(p1)+length(p2))
===== Naive-bayes classifier =====
So, we've been working with a very simple classifier (projection line in the direction between the two means). When does this fail? Well, for one thing it doesn't take into account different variance in different dimensions of the data. One voxel may have the same mean difference between different classes but be so noisy as to provide no reliable difference compared to a voxel that has the same mean difference but very little variance. We can set up this situation by making up some data using classifyTest:
[x1 x2] = classifyTest('type=none');
Something like this:
{{:mrtools:classifyvar.png?337x303|}}
Copy and paste the data below if you want to use the exact same data as in the tutorial instead of defining your own points:
++Data to copy and paste|
\\
\\
x1 = [-0.37 0.72;-0.59 0.8;0.46 0.71;0.24 0.62;-0.16 0.5;-0.49 0.54;-0.6 0.65;-0.83 0.57;-0.91 0.41;-1.11 0.68;-1.34 0.63;-1.33 0.45;-1.07 0.57;-0.82 0.82;-0.65 0.51;-0.62 0.5;-1.08 0.77;-1.54 0.8;-0.13 0.74;0.06 0.58;0.12 0.79;-0.24 0.83;-0.03 0.56;0.28 0.43;0.47 0.57;0.39 0.79;-0.44 0.87;-0.96 0.7;-0.04 0.24;-0.07 0.33;-0.08 0.5;-0.26 0.5;-0.47 0.44;-0.53 0.32;-0.67 0.48;-0.7 0.27;-0.92 0.32;-0.87 0.1;-0.19 0.11;0.31 0.15;0.72 0.21;0.65 0.43;0.65 0.62;0.48 0.77;0.18 0.88;-0.02 0.88;-0.32 0.9;-1.27 0.48;-1.46 0.6;-0.92 0.4;-0.32 0.61];\\
\\
x2 = [0.0 0.45;-0.25 0.36;-0.63 0.34;-0.69 0.39;-0.78 0.21;-0.99 0.14;-0.85 -0.03;-0.38 -0.1;0.02 0.01;0.31 0.0;0.6 -0.01;0.85 0.05;1.05 0.08;1.13 0.11;1.2 0.36;0.99 0.58;0.86 0.56;0.51 0.55;0.25 0.43;0.11 0.43;0.1 0.2;-0.14 0.28;-0.58 0.16;-0.67 0.24;-0.31 0.25;-0.13 0.02;0.3 0.32;0.58 0.23;0.85 0.25;0.83 0.42;0.29 0.3;0.4 0.03;0.46 0.27;0.65 0.23;0.84 0.07;0.94 0.32;0.35 0.4;1.14 0.21;1.28 0.2;1.08 0.44;0.56 0.39;-0.34 0.08;-0.59 0.07;-0.76 0.07;-0.89 0.26;-1.0 0.36;0.19 0.11;0.65 0.1;1.08 0.17;1.08 0.33];\\
\\
++
Now, let's try the mean classifier:
mx1 = mean(x1);
mx2 = mean(x2);
w = mx2-mx1;
classifyTest(x1,x2,'w',w);
This classifier basically weights both dimensions equally even though clearly there is more variability in the dimension on the abscissa:
{{:mrtools:classifyvarmean.png?337x303|}}
So, what's the problem with this solution? Well, if you were to do it by eye, you would probably want the decision boundary to be more horizontal - that is, you would want to ignore the differences in the dimension on the abscissa, which has high variability, and weight more heavily the differences on the ordinate, which has much less variability. Another way to put that is that you want to weight each dimension by the inverse of its variability. The higher the variability in a dimension, the less you want to weight it. So, let's just do that. First let's calculate the standard deviation of each dimension (pooling across both classes):
s = std([x1 ; x2])
You should see that s is larger for the first dimension than the second. That is, there is more noise on the first dimension (think voxel) than on the second dimension (voxel). So, you should adjust the weights accordingly: make the weight on the first dimension smaller because its standard deviation is larger. To do so, multiply each dimension's weight by the **inverse** of its standard deviation.
w = w * 1./s;
Now, let's see if that does a better job:
classifyTest(x1,x2,'w',w);
{{:mrtools:classifyvarbystd.png?337x303|}}
Better. The decision boundary has now rotated to be slightly more horizontal, reflecting the fact that there is less variability in the dimension on the ordinate than in the dimension on the abscissa.
This solution, which assumes that dimensions do not covary, is sometimes called a Naive-bayes classifier. Personally, that name doesn't do much for me. But they say that the "naive" part comes from naively assuming that dimensions don't covary (more on this in the next section) and the "Bayes" because this is a Bayes-optimal solution for gaussian distributions with equal variances and no covariance.
===== Fisher Linear Discriminant =====
So, incorporating the inverse of the standard deviation is a good idea. But is this enough? Consider the following case in which there is covariance between the responses in the two dimensions. That is, the higher the response in one dimension, the higher the response in the other dimension. This can often happen with fMRI data, in which neighboring voxels' responses covary:
{{:mrtools:classifycovary.png?337x303|}}
You may want to copy and paste this data set and try it out yourself:
++Data to copy and paste|
\\
\\
x1 = [-0.62 0.22;-0.88 -0.12;-0.7 -0.14;-0.74 0.13;-0.81 -0.16;-0.81 0.01;-0.63 0.02;-0.44 0.3;-0.29 0.37;-0.43 0.5;-0.52 0.34;-0.26 0.5;-0.0 0.53;0.02 0.85;-0.09 0.76;-0.09 0.59;-0.09 0.54;-0.16 0.57;-0.24 0.63;0.12 0.69;0.23 0.95;0.14 0.9;0.04 0.74;0.29 0.76;-0.11 0.36;-0.39 0.15;-0.52 0.14];\\
\\
x2 = [-0.98 -0.46;-0.83 -0.36;-0.68 -0.23;-0.56 -0.09;-0.41 0.0;-0.28 0.1;-0.13 0.25;0.01 0.33;0.14 0.45;0.1 0.32;0.09 0.21;-0.2 0.13;-0.18 -0.06;-0.18 -0.17;-0.33 -0.14;-0.43 -0.14;-0.42 -0.47;-0.69 -0.48;-0.82 -0.48;-0.89 -0.48;-1.03 -0.62];\\
\\
++
Let's try the tricks we have in our playbook for dealing with this. First the mean classifier:
m1 = mean(x1);
m2 = mean(x2);
w = m2-m1;
classifyTest(x1,x2,'w',w);
And, we get this:
{{:mrtools:classifycovarymean.png?337x303|}}
Terrible! What happened? Well, look at the distributions: on the abscissa there is a slight difference with the blue to the left of the red, and on the ordinate the red is slightly higher than the blue, so the boundary is correct for the means, but it's missing the dead obvious - it should be tilted up and to the right instead of down and to the left.
Ok. So let's go with trick two in our playbook. Let's scale by the inverse of the variances:
s = std([x1;x2]);
w = w.*(1./s);
classifyTest(x1,x2,'w',w);
And we get:
{{:mrtools:classifycovarystd.png?337x303|}}
And that helped us not one bit. Why? Well, the variability in each dimension is about the same, so weighting by the variability doesn't change a thing.
So, that gets us to thinking about the defining characteristic here - that the responses in the two dimensions covary. We need to use that fact to define the decision boundary. To do so, let's start by calculating the covariance:
c = cov([x1;x2])
If you take a look at c:
c =
0.126125531914894 0.11893085106383
0.11893085106383 0.172812588652482
Note that the values on the diagonal are just the variances in each of the dimensions (compare with what var([x1;x2]) gives you). The value in the top right (symmetric with the bottom left) is the covariance between the two dimensions. So, how should we use this? Well, before, we scaled by the inverse of the variances. Doing the same with the covariance matrix here simply generalizes that. So, let's scale the mean classifier by the inverse of this covariance matrix:
w = inv(c) * (m2-m1)';
And, Bam! That does the trick:
classifyTest(x1,x2,'w',w);
{{:mrtools:classifyfisher.png?337x303|}}
Why does this work? Because it takes into account the means, properly weights them by the inverse of the variances (remember that the variances are on the diagonal of the covariance matrix) **and** takes into account the covariance of the two dimensions. It turns out this formulation - scaling the mean classifier by the inverse of the covariance matrix - is the **right** thing to do (in a Bayes-optimal kind of way) for any classification problem in which the variables are gaussian distributed with equal covariance across the classes. It has a name. It's called the Fisher Linear Discriminant (remember that, 'cause it's a great solution!).
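Putting the pieces from this section together, the whole Fisher build-and-test recipe fits in a few lines (just a sketch using the variables defined above, with the bias point computed as the midpoint of the projected class means, as before):
% compact sketch of the Fisher linear discriminant steps worked through above
c = cov([x1;x2]);
w = inv(c)*(mean(x2)-mean(x1))';              % Fisher weight vector (column)
biasPoint = (mean(x1)*w + mean(x2)*w)/2;      % midpoint of projected class means
% test: project a new instance and compare against the bias point
testInstance = x2(1,:);
isClass1 = (testInstance*w < biasPoint)       % true -> class 1, false -> class 2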
===== Support vector machine =====
So, the Fisher Linear Discriminant works ideally for identically distributed gaussian data - but are there times when it fails? You bet! When the distributions are not gaussian. We'll get into one way that they fail (and actually a way that all linear classification schemes fail), but first let's talk about support vector machines.
An alternative to Fisher Linear Discriminant for finding a good decision boundary starts from consideration of what makes a good decision boundary. Well, one might think, a good decision boundary is one such that the distance between that boundary and the points of each of the two classes is maximized. For support vector machines, this is called "maximizing the margin" - the margin being the space around the decision boundary without any points from either distribution. So, what does that look like? Make two distributions like this:
{{:mrtools:classifysvm1.png?337x303|}}
++Data to copy and paste|
\\
\\
x1 = [-0.49 0.7;-0.64 0.78;-0.67 0.65;-0.67 0.22;-0.4 0.22;-0.26 0.4;-0.26 0.64;-0.38 0.92;-1.23 0.79;-1.24 0.36;-1.33 -0.07;-0.92 -0.37;-0.66 0.4;-0.98 0.52;-0.83 0.03;-0.31 0.04;-0.03 0.67;0.27 0.97;0.14 1.14;-0.62 1.18;-0.87 1.25;-0.87 0.84;-0.9 0.27;0.02 0.43;-0.12 0.92;-0.12 0.29];\\
\\
x2 = [;-0.13 -0.29;0.06 0.08;0.18 0.3;0.6 0.6;0.74 0.94;1.01 1.24;1.12 1.41;1.28 1.41;1.4 1.23;1.1 1.09;1.38 0.69;0.76 0.53;0.99 0.36;1.12 0.84;0.94 0.85;1.24 0.73;1.28 0.36;0.49 0.36;0.44 -0.05;0.16 -0.19;0.68 -0.3;1.19 -0.3;0.51 -0.39;0.13 -0.37;-0.04 -0.4;0.74 -0.06;1.14 -0.06;0.82 0.23;0.5 0.13;0.95 0.44;1.31 0.47;1.31 0.09;1.33 -0.22];\\
\\
++
And let's calculate the decision boundary that maximizes the margin with classifyTest:
classifyTest(x1,x2,'type=svm');
And we get this:
{{:mrtools:classifysvmwithmargin.png?337x303|}}
If you look at this, there is a solid white line (the decision boundary) and the neighboring dotted lines define the margin - the space around the boundary up to the nearest points of either distribution. The instances that are contacted by the margins are highlighted with a white border. These instances are called "support vectors" because they are the vectors (i.e. instances) that //support// the solution. If you changed those points (for example, removed them), you would get a different solution to the problem - i.e. a different decision boundary that "maximizes the margin". Indeed, you could remove any of the other instances from the distributions and the solution wouldn't change at all. It is only these support vectors that matter. Thus the name.
So, how did we calculate this decision boundary that maximized the margin? Well, you could think of it like this. We could try every orientation and position for the boundary, calculate the margin for each candidate decision boundary, and then choose the one with the largest margin. That would work, but would take a long, long time. Instead, there is a clever mathematical solution: the problem of maximizing the margin can be converted into a dual problem which can be solved with a well-known quadratic programming method and then converted back into the solution you want. So, the solution can be calculated very efficiently. The math is very clever - much too clever for me to explain, but there are a number of good sources if you are interested in learning more (Pattern Classification, Duda, Hart & Stork, Chapter 5; A Tutorial on Support Vector Machines for Pattern Recognition, Christopher J.C. Burges, Data Mining and Knowledge Discovery 2, 121-167 (1998); and An Introduction to Kernel-Based Learning Algorithms, Muller, Mika, Ratsch, Tsuda and Scholkopf, IEEE Transactions on Neural Networks 12.2 (2001)).
One thing that may be a bit mysterious is: what if there is no margin because the two distributions overlap? Make some points like the following:
{{:mrtools:classifysvmoverlapped.png?337x303|}}
Well, there is a sort of fudge factor parameter C. The larger C is, the more tolerant the algorithm becomes. That is, if C is 0, then the solution will fail because it can't tolerate any instance being on the wrong side of the boundary. So, for example, if we set C to 0:
classifyTest(x1,x2,'type=svm','C=0');
We get this - total failure:
{{:mrtools:classifysvmtotalfailure.png?337x303|}}
If we set C to a positive value, you get a reasonable boundary since it ignores the fact that some of the instances are on the wrong side:
classifyTest(x1,x2,'type=svm','C=100');
{{:mrtools:classifysvmc100.png?337x303|}}
Note that it still computes "support vectors" as before, but it ignores the fact that there are other instances within the margin and even on the wrong side.
Play around with changing different C values to see the effect. How would you set this value in practice? Well, you could either just set it arbitrarily to some big number or you could use cross-validation to find an optimal C value on some test set of data.
Now, you should go back and try the svm solution on any of the data that we used before - for many data sets it will end up giving you almost identical solutions to the Fisher Linear Discriminant. So, why bother? Well, the real beauty of svm is that it can handle non-linear boundaries.
So, here is a problem that busts any linear classifier including Fisher - the dreaded xor. Xor (exclusive or) is a really, really simple function - it returns true if one of the inputs is high, but not both. In graphical form, it looks like the following:
{{:mrtools:classifyxor.png?337x303|}}
++Data to copy and paste|
\\
\\
x1 = [-0.58 0.71;-0.72 0.62;-0.72 0.24;-0.48 0.22;-0.45 0.39;-0.44 0.55;-0.61 0.56;-0.6 0.38;-0.74 0.41;-0.74 0.89;0.0 0.01;0.0 -0.06;-0.08 -0.15;-0.08 -0.43;0.06 -0.43;0.16 -0.43;0.21 -0.15;0.18 -0.0;0.05 -0.18;0.04 -0.29;0.22 -0.3;0.31 -0.22;0.31 -0.16;0.13 0.04;0.12 -0.14];\\
\\
x2 = [-0.11 0.91;-0.13 0.69;-0.13 0.47;0.02 0.45;0.19 0.46;0.18 0.31;0.01 0.32;0.03 0.58;0.09 0.81;0.24 0.76;0.25 0.62;0.32 0.48;-0.35 -0.0;-0.68 -0.0;-0.74 -0.04;-0.72 -0.29;-0.72 -0.43;-0.51 -0.43;-0.33 -0.35;-0.45 -0.12;-0.58 -0.14;-0.56 -0.36;-0.67 -0.5;-0.83 -0.3;-0.55 -0.22];\\
\\
++
This is an xor problem because you only accept a point into one category (red) if it has a high value (like a 1 in the logical case) in one dimension and a low value (0) in the other dimension, but not if it has two high values (1 and 1) or two low values (0 and 0). A linear svm classifier fails miserably:
classifyTest(x1,x2,'type=svm')
{{:mrtools:classifyxorsvmfail.png?337x303|}}
Why does it fail? Well, basically no matter how you split it, you can't cut a straight line through these points to get everything right. Any linear classifier including Fisher Linear Discriminant will fail.
So, what to do? Well, we need a non-linear boundary of course. But, well, that's hard. So very clever people figured out a simpler solution - just add non-linear combinations of your dimensions to your data and then use the good old linear classifier in this new space to do classification. Huh, what? Well take a look at the following:
{{:mrtools:classify1dxor.png?168x151|}}
This is another version of the xor problem but in 1 dimension. There is no boundary that you can put between these points to correctly classify them all. But, if you just add a second dimension which is calculated as the square of the first dimension, you get a 2 dimensional problem which is now separable:
{{:mrtools:classifyxor2d.png?168x151|}}
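Here is a hedged sketch of that idea with made-up 1D data (the numbers and variable names are invented purely for illustration): add the square of the original dimension as a second dimension and the simple mean classifier from earlier can separate the classes.
% toy sketch: beat a 1D xor-like problem by adding a squared dimension
% the data points here are made up purely for illustration
y1 = [-0.1 0.05 0.0 -0.05 0.1]';         % class 1: clustered around zero
y2 = [-1.1 -0.9 -1.0 0.9 1.0 1.1]';      % class 2: clustered at both extremes
% no single boundary on the original axis separates them - add the square
y1new = [y1 y1.^2];
y2new = [y2 y2.^2];
% now the simple mean classifier from earlier separates the classes
w = mean(y2new)-mean(y1new);
classifyTest(y1new,y2new,'w',w);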
So, just adding a new dimension (calculated as the square of the first dimension) to your data allows you to use a basic linear classifier to beat the problem. We could just add new dimensions to our data by computing squares and so forth, but then our data would get huge. It turns out that for support vector machines (and this is a really nice property) the data only appear in dot products with each other, so instead of explicitly calculating new non-linear dimensions you can just transform the dot products, and this is really quick and easy to do. So, here is a solution with svm in which we effectively add a squared dimension by using this trick of squaring the dot products of the data:
classifyTest(x1,x2,'type=svm','kernelFun=polynomial','kernelArgs=2','C=1')
{{:mrtools:classifysvmxorpoly.png?337x303|}}
Note that we don't actually draw the boundaries for technical reasons, but you can see from the cyan/magenta coloring that the boundary fits the data we made properly and that no points are misclassified. And here is the rub: overfitting. With non-linear classifiers you can easily overfit data by getting boundaries that neatly follow the randomness of the data you have and do not generalize well to new data. Consider the following data:
++Data to copy and paste|
\\
\\
x1 = [-0.61 0.82;-0.41 0.67;-0.46 0.42;-0.66 0.54;-0.96 0.56;-0.89 0.25;-0.45 0.06;-0.19 0.25;-0.23 0.54;-0.65 0.35;-0.77 0.81;-0.43 0.87;-0.26 0.8;-0.33 0.4];\\
\\
x2 = [-0.09 0.34;-0.36 0.28;-0.55 -0.08;-0.36 -0.29;-0.11 -0.4;0.25 -0.22;0.25 0.1;0.15 0.26;-0.02 0.13;-0.25 0.03;-0.09 -0.15;0.16 -0.05;0.04 -0.44;-0.31 -0.5;-0.52 -0.36;-0.27 -0.15];\\
\\
++
We can try a funky non-linear classification scheme (in this case, instead of a polynomial function we use a radial basis function - simply a gaussian function - to transform into a new dimension):
classifyTest(x1,x2,'type=svm','kernelFun=radialbasis','kernelArgs=1')
And we get this:
{{:mrtools:classifysvmpoly.png?337x303|}}
Note the funky boundary - is this a good generalization of the data? Probably not. How does one avoid this? Two things. First, avoid non-linear classifiers unless you have a very good reason to believe that your data will need non-linear boundaries (like you know you have data with the xor problem). Second, use cross-validation.
===== Cross validation =====
There's not much to say about cross-validation except that you **must must** do it. Build the classifier on some data (the build set) and test it on some left out data (the test set). You can do this by splitting the data in halves (one half to build and one half to test). But sometimes it is better to use more data to build, to better fit the weights of the classifier, and test on fewer instances. At one extreme you can do a leave-one-out classification (that is, build on all but one instance and test on the remaining instance to see how well you did - then cycle through each one of the instances as the left out test set). Another scheme is to use 4/5 of your data to build and test on the remaining 1/5. Finally, for fMRI data, consider doing a leave-one-run-out cross-validation scheme. This is good because any serial dependencies of responses (due to hemodynamic lag and sluggishness) will not contaminate the test set with the end of the responses to the build set and vice versa.
To run a simple leave-one-out cross-validation on any of the data from above do:
l = leaveOneOut({x1 x2})
This should return this:
(leaveOneOut) Class 1 has 14 instances with 2 dimensions
(leaveOneOut) Class 2 has 16 instances with 2 dimensions
(leaveOneOut) fisher classifier produced 90.00% correct and took 3 secs 64 ms
l =
type: 'fisher'
classifierParams: [1x1 struct]
whichClass: {[1 1 1 1 1 1 2 2 1 1 1 1 1 1] [2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2]}
classifierOut: {[1x14 double] [1x16 double]}
confusionMatrix: [2x2 double]
correct: 0.9
correctSTE: 0.0547722557505166
This shows a few things. One, which class each instance got classified into (the whichClass field). It also tells you the overall percent correct (90%). The confusionMatrix tells you how often each category was classified as each category:
>> l.confusionMatrix
ans =
0.857142857142857 0.142857142857143
0.0625 0.9375
It shows that category 1 was classified as category 1 85.7% of the time and category 2 as category 2 93.75% of the time. Category 1 was misclassified as category 2 14.3% of the time, and category 2 was misclassified as category 1 6.25% of the time.
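As a quick check (plain Matlab arithmetic, using the numbers printed above), the overall percent correct is just the class-size weighted average of the diagonal of the confusion matrix:
cm = [0.857142857142857 0.142857142857143;0.0625 0.9375];    % l.confusionMatrix from above
nPerClass = [14 16];                                         % instances in each class
overallCorrect = sum(diag(cm)'.*nPerClass)/sum(nPerClass)    % returns 0.9, matching l.correct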
====== Classification on example fMRI data ======
Now we are going to try to do a basic classification analysis on some real data. For didactic purposes this is going to be a really easy classification - whether a stimulus was presented below or above the fixation point. The experiment was a simple localizer in which different visual stimuli (it is not important for the purposes of this tutorial to know exactly what these stimuli were) were presented above or below the fixation point in blocks of 16.2 seconds in a randomized order. The classifier will be built on some subset of the trials and tested on the left out portion - to see if we can predict at which location the stimulus was presented on these test trials.
Now, typically, you may want to run classification analyses on more subtle differences, particularly responses that you think have a distributed representation that is hard to get at by other means, but we will use this example just to get used to the basics of the analysis. Based on the topographic representation of space in visual cortex it should be very easy to find some voxels that respond more when the stimulus is above the fixation point and some voxels that respond more when the stimulus is below the fixation point. Combining the information from all these voxels using classification, it should be very easy to predict where the stimulus was presented.
===== Event-related analysis =====
Switch directories to the classification tutorial directory (classificationTutorial) and start mrLoadRet:
cd classificationTutorial;
mrLoadRet;
First, we're going to run a basic event-related analysis. You may want to review the [[:mrTools:tutorialseventrelated|event-related tutorial]]. Start by concatenating together all the scans using Analysis/Concatenate Time Series. Default parameters for the filtering are fine.
Switch to the new Concatenation group (Group drop-down on the top left) and check the scan info with Edit/Scan/Info (⌘I). Just make sure that you correctly concatenated the four scans (scanList=[1 2 3 4]) and used the default filtering options (filterType = "Detrend and highpass", filterCutoff = 0.01).
Now run an event-related analysis (Analysis/Event Related Analysis) and use all the default options until you get to the following dialog box, in which you will set the varname to upDown (note that it is really important to put the variable name "upDown" into the dialog box, otherwise the analysis won't know which variable to compute event-related responses for):
{{:mrtools:classifytut01.png|}}
Set the Clipping overlay Min (slider or text box in the bottom left) to 0.2 and then switch the Base (drop-down in the top left) to display the left occipital cortex flat patch (s017LeftOccSmall). You should see a map of the r2 values that looks like the following.
{{:mrtools:classifytut02.png?430x407|}}
Turn on the interrogator (Plots/Interrogate Overlay) and click on a few points at the top using the eventRelatedPlot interrogator and you should see something like this:
{{:mrtools:classifytut03.png?444x270|}}
Clearly larger responses for the trials when the stimulus was in the lower visual field, just as is expected for the topographic representation in early visual cortex. Clicking on voxels at the bottom, you should see this pattern reversed.
{{:mrtools:classifytut04.png?444x270|}}
===== Classification analysis =====
Now, it should be quite easy to build a classifier, right? Just by seeing whether voxels that represent the upper visual field are responding or the ones that represent the lower visual field are responding we should be able to predict where the stimulus was. Let's train a classifier to do this.
We need to get the instances of responses for each stimulus type. That is, we want a number representing the response to each trial from each voxel in an ROI. Let's do this for left V1. Load the lV1 ROI from File/ROI/Load.
You should see the ROI like this (check that Selected ROIs (Perimeter) is selected in the Display Mode drop-down in the middle left):
{{:mrtools:classifytut05.png?430x407|}}
Note that the voxels with high r2 are right at the top and bottom of V1, just as expected (since that is where the lower and upper vertical meridians are represented).
Now go to the Matlab command line and use getStimvol to get the stimvols for the upDown stimulus. That is, we want to know which volumes in the scan each stimulus began at. That is what is put into stimvol. Stimvol is a cell array with two elements. The first is all the stimvols for when the stimulus was in the upper visual field, and the second when the stimulus was in the lower visual field.
>> stimvol = getStimvol(getMLRView,'upDown')
(getStimvolFromVarname) taskNum=[1], phaseNum=[1], segmentNum=[1]
(getStimvolFromVarname) upDown=1: 10 trials
(getStimvolFromVarname) upDown=2: 10 trials
(getStimvolFromVarname) taskNum=[1], phaseNum=[1], segmentNum=[1]
(getStimvolFromVarname) upDown=1: 10 trials
(getStimvolFromVarname) upDown=2: 10 trials
(getStimvolFromVarname) taskNum=[1], phaseNum=[1], segmentNum=[1]
(getStimvolFromVarname) upDown=1: 10 trials
(getStimvolFromVarname) upDown=2: 10 trials
(getStimvolFromVarname) taskNum=[1], phaseNum=[1], segmentNum=[1]
(getStimvolFromVarname) upDown=1: 10 trials
(getStimvolFromVarname) upDown=2: 10 trials
stimvol =
[1x40 double] [1x40 double]
This shows you that there were 10 trials of each type in each of the four component localizer scans, for a total of 40 instances of each response type.
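A quick sanity check from the command line (standard Matlab) is to count the stimvols per condition:
cellfun(@length,stimvol)     % should return [40 40]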
Now that we know the stimvols we need to get the responses to each stimulus presentation. We are going to do this by averaging the response in the time series for each of the voxels in left V1 across the time when the stimulus was presented (actually we time shift by a few seconds to account for the hemodynamic lag). First load the time series for lV1:
>> lV1 = loadROITSeries(getMLRView,'lV1')
(loadROITSeries) Loading tSeries for lV1 from Concatenation: 1 took 0 secs 766 ms
lV1 =
color: 'magenta'
coords: [4x5397 double]
date: '05-Jul-2013 12:57:25'
name: 'lV1'
notes: ''
sformCode: 1
viewType: 'Volume'
vol2mag: [4x4 double]
vol2tal: []
voxelSize: [1 1 1]
xform: [4x4 double]
scanNum: 1
groupNum: 2
scanCoords: [3x200 double]
n: 200
tSeries: [200x480 double]
Now, in the field tSeries you have the time series (480 volumes) for each of the 200 voxels that we have in left V1. We next set a sort order. This is used when you are only going to use a subset of your voxels in the classification (for example, you may want to use only one hundred rather than the full two hundred to equalize ROI size across different visual areas). But which voxels to use? The sort index tells the programs what order to use them in. For right now, we don't care because we are going to use all the voxels in the ROI, so let's just set it to the order in which the voxels happen to appear in the ROI:
lV1 = getSortIndex(getMLRView,lV1);
We now need to snip this up into the pieces of response after each stimulus presentation and average for the duration of the stimulus. We use getInstances:
>> lV1 = getInstances(getMLRView,lV1,stimvol,'n=inf')
(getInstances) Creating 40.0 instances for lV1 with Inf voxels by taking mean from vol 1:6 (2.70s-16.20s) took 2 secs 174 ms
lV1 =
[1x1 struct]
Note that getInstances turned lV1 into a cell array (usually we run it on multiple ROIs), so let's just turn it back into a single ROI:
lV1 = lV1{1};
Now, we can grab the instances from the ROI which is stored under the classify field:
>> i = lV1.classify.instances
i =
[40x200 double] [40x200 double]
So there are 40 instances of response for each category (stimulus up or stimulus down) and each instance is a 200-dimensional vector (one dimension for each voxel - compare to above where we had just 2 dimensions).
Well, we can just run leaveOneOut on this and see how we do:
>> leaveOneOut(i)
(leaveOneOut) Class 1 has 40 instances with 200 dimensions
(leaveOneOut) Class 2 has 40 instances with 200 dimensions
(leaveOneOut) fisher classifier produced 91.25% correct and took 10 secs 449 ms
ans =
type: 'fisher'
classifierParams: [1x1 struct]
whichClass: {1x2 cell}
classifierOut: {[1x40 double] [1x40 double]}
confusionMatrix: [2x2 double]
correct: 0.9125
correctSTE: 0.0315918798902503
91% correct is not bad. Note that this is defaulting to using fisher linear discriminant.
So, great. We can classify the stimulus, but what does that mean? Well, it means that there are responses that are specific to whether the stimulus was presented above or below fixation, just as expected. If there was nothing useful, this number would hover around 50% and you would know that there probably is nothing very interesting in your data with respect to the variables you are trying to classify.
===== Weight maps =====
It is a good idea for any classification you do to check what the spatial arrangement of voxels is that allows the classifier to work. Let's look at this. We already set the ROI's sortIndex above (to dummy values, which is fine since we are using all the voxels), so we can go ahead and compute the weight map:
>> weightMap = getClassifierWeightMap(getMLRView,lV1,stimvol);
(getInstances) Creating 40.0 instances for lV1 with Inf voxels by taking mean from vol 1:6 (2.70s-16.20s) took 2 secs 114 ms
(buildClassifier) Building fisher classifier for ROI lV1 took 0 secs 105 ms
What this did was build a classifier to classify the stimulus classes in stimvol using all the voxels in the ROI (no cross-validation) and it returns the weightMap.
Now weightMap contains the map of weights for each voxel in the ROI (actually it has the same dimensions as the scan, 64x64x38, where voxels with a computed weight are set to that weight and all other values contain NaN).
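A quick sanity check (this just relies on the NaN convention described above):
size(weightMap)                  % should be the scan dimensions, 64 64 38
sum(~isnan(weightMap(:)))        % should equal lV1.n (200), one weight per ROI voxel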
Now let's display it. First drop the erAnal analysis that you are currently viewing by doing View/Remove Analysis. Now from the command line do:
mrDispOverlay(weightMap,lV1.scanNum,lV1.groupNum,getMLRView,{'up vs down'},'colormapType','normal','cmap',splitcmap,'range=[-9 9]','clip=[-9 9]');
You should see the following:
{{:mrtools:classifytut06.png?430x407|}}
You should see that negative weights are near the top and positive weights near the bottom. This is exactly what we expect based on the topography. Voxels that have a larger response when the stimulus is presented at the bottom have negative weights and voxels with larger responses when the stimulus is presented at the top have positive weights. You can see this more clearly by clipping out all the weights that are in the middle range. This is done by crossing the min/max ranges of the Clipping Overlay (bottom left). That is, set the min to 0.5 and the max to -0.5 and you should see the following:
{{:mrtools:classifytut07.png?430x407|}}
Clearly the voxels that have high weights (contribute information to the classifier) are all just where we expect them to be: at the upper and lower parts of V1 that represent the vertical meridian where the stimuli were placed.
===== Number of voxels by accuracy plots =====
Sometimes it is interesting to know how much better the performance of the classifier gets as a function of how many voxels enter the analysis. If only a few voxels are needed it suggests strong selectivity - if you need many it suggests weak selectivity in a large distributed pattern. If you add too many voxels you might just be adding noise and performance can decline. So, let's see what happens with this data set.
First we have to decide what order to include voxels in the analysis. This is a very important step. Typically what we do is use an independent localizer of some sort (perhaps retinotopy) to define which voxels we think will have a stimulus driven response. In this case, we are going to cheat a little and use the r2 from the event-related responses. This is cheating because it is a violation of cross-validation (choosing which voxels to use based on data from the test set) - we should really choose voxels using data only from the training set, or better yet a completely independent data set, so that we don't break cross-validation.
But, with that in mind, we need to get the r2 values from the event-related analysis. Go to the GUI and remove the mrDispOverlay analysis of the classifier weights (View/Remove Analysis). Now load the erAnal analysis: File/Analysis/Load and choose erAnal/erAnal.mat.
Then from the command window get the r2 values of the overlay:
r2 = viewGet(getMLRView,'overlayData',1);
Now we are going to set the sort order according to this r2 map (remember before we just set the sort order to the random order that the voxels happen to be in the ROI):
lV1 = getSortIndex(getMLRView,lV1,r2);
Now, let's compute leaveOneOut classification performance as a function of how many voxels we use. This may take a few minutes to compute:
voxNums = [1:20:lV1{1}.n];
correct = [];
for nVox = voxNums
lV1 = getInstances(getMLRView,lV1,stimvol,'n',nVox);
instances = lV1{1}.classify.instances;
l = leaveOneOut(instances);
correct(end+1) = l.correct;
disp(sprintf('%0.2f%% correct with %i voxels',100*correct(end),nVox));
end
Now if we plot it, we can see that we basically always get the best performance, even if we use only 1 voxel!
plot(voxNums,correct,'ko-');
xlabel('Number of voxels (n)');
ylabel('Classification accuracy (percent correct)');
{{:mrtools:classifytut08.png?337x303|}}
Just to prove that the order matters, let's reverse the sortindex so that now the worst voxels (in terms of how much they respond to the stimulus - not necessarily how well they differentiate between stimuli) are first.
lV1{1}.sortindex = fliplr(lV1{1}.sortindex);
Now do the classification again:
voxNums = [1:2:10 20:10:100 120:20:lV1{1}.n];
correct = [];correctSTE = [];
for nVox = voxNums
lV1 = getInstances(getMLRView,lV1,stimvol,'n',nVox);
instances = lV1{1}.classify.instances;
l = leaveOneOut(instances);
correct(end+1) = l.correct;
correctSTE(end+1) = l.correctSTE;
end
Now you only get good performance when you add the best voxels back.
plot(voxNums,correct,'ko-');
xlabel('Number of voxels (n)');
ylabel('Classification accuracy (percent correct)');
{{:mrtools:classifytut09.png?337x303|}}
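Since the second loop also collected correctSTE, you could optionally plot error bars instead (standard Matlab errorbar; this figure is not part of the original tutorial screenshots):
errorbar(voxNums,correct,correctSTE,'ko-');
xlabel('Number of voxels (n)');
ylabel('Classification accuracy (proportion correct)');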
===== Classify by hand =====
Just for fun, let's see if we can classify the data by hand. Let's get the instances again for two voxels:
lV1 = getSortIndex(getMLRView,lV1,r2);
lV1 = getInstances(getMLRView,lV1,stimvol,'n=2');
instances = lV1{1}.classify.instances;
Now we can plot the points:
classifyTest(instances{1},instances{2},'tightAxis=1','type=none')
{{:mrtools:classifytut11.png?337x303|}}
And we can try the mean classifier:
m1 = mean(instances{1});
m2 = mean(instances{2});
w = m2-m1;
classifyTest(instances{1},instances{2},'tightAxis=1','w',w);
{{:mrtools:classifytut12.png?337x303|}}
Already that does well. As an exercise try it with naive-bayes or fisher or svm and see what you get.
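For example, these are the same classifyTest calls used earlier in the tutorial, just with a different type argument (the exact option string for naive-bayes may differ, so check the classifyTest help):
classifyTest(instances{1},instances{2},'tightAxis=1','type=fisher');
classifyTest(instances{1},instances{2},'tightAxis=1','type=svm');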
We can also try to build with all 200 voxels:
lV1 = getInstances(getMLRView,lV1,stimvol,'n=inf');
instances = lV1{1}.classify.instances;
But then it is hard to visualize all 200 dimensions, so we need to project into a smaller space. We could do this in many ways. Some people do this as a first step to classification to get rid of noise, but it's risky. Which dimension is the best dimension for classification? It could be the dimensions that have the most variance in them - but that variance may not be associated with responses that are **different** for the different categories. Nonetheless, we will project onto these dimensions - the ones that account for the most variance in the data. We can do this using PCA:
pcaBasis = pca([instances{1};instances{2}]);
This is essentially a rotation of the data space so that the orthogonal axes point in the directions of most variance within the data. We take the top two dimensions and project our data down on to those dimensions:
for iNum = 1:40
x1(iNum,1) = dot(instances{1}(iNum,:),pcaBasis(:,1)');
x2(iNum,1) = dot(instances{2}(iNum,:),pcaBasis(:,1)');
x1(iNum,2) = dot(instances{1}(iNum,:),pcaBasis(:,2)');
x2(iNum,2) = dot(instances{2}(iNum,:),pcaBasis(:,2)');
end
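As an aside, the same projection can be written without the loop as a single matrix multiplication (plain Matlab; it gives identical x1 and x2):
x1 = instances{1}*pcaBasis(:,1:2);   % 40x2: each instance projected onto the first two components
x2 = instances{2}*pcaBasis(:,1:2);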
Then the data in these two dimensions look like this:
classifyTest(x1,x2,'tightAxis=1','type=none')
{{:mrtools:classifytut13.png?337x303|}}
And classifying using Fisher like this:
classifyTest(x1,x2,'tightAxis=1','type=fisher')
We get:
{{:mrtools:classifytut14.png?337x303|}}
-jg.
The following sections introduce the functionality of the mrLoadRet-4.5 Graphical User Interface. All options in this section will either generate a .mat file, modify mrSESSION.mat or modify the .img files.
====== File Menu ======
Look in the file menu for functions that let you load / save **anatomy files**, **analyses**, **ROIs** and so on. There are also functions that let you print the currently displayed image, import / export data, ROIs, write out information about the session in a Readme.txt file, and quit the current session of mrLoadRet.
===== Base Anatomy =====
{{:mrtools:file-load.png|File Menu (Matlab on Mac OSX)}}
* **Load:** Select an anatomy file on which overlays will be rendered. Normally, this will be your **inplane** anatomy file which was collected during the scanning session at the same slice locations as your functional data.
* **Load from volume directory:** Select a **volume anatomy** file on which overlays will be rendered. By default, the file dialog window opens in the //volumeDirectory// folder specified in your preferences. Make sure that the qform and sform matrices of the file you select are correct. Normally, you will use the **volume** anatomy file that was used as the target in your alignment steps (see mrAlign documentation).
* **Use current scan** Lets you use the current functional scan as the background image. You will be prompted to select a time point from the 4D volume; the first frame in the sequence is 1. If you select time point '0', the mean 3D image across time will be used.
* **Load flat** Load flat patches created by the steps described in the flat map documentation.
* **Load Surface** Load OFF surfaces created with TFI/SurfRelax. You will be prompted for an //outer surface// filename. Provide the gray matter (GM) surface from one subject/hemisphere. You will then be prompted to select the matching //inner surface// (white matter, WM), curvature, and anatomy file. See documentation for mrSurfaceViewer for more details.
* **Save** Save the currently used anatomy image with a default filename
* **Save As...** Save the currently used anatomy image and specify a filename
===== Analysis =====
* **Load** Load an analysis that was computed and saved earlier. Standard analyses that can be created by functions accessible in the GUI **Analysis** menu are //Time Series Statistics// (e.g. mean, median, across time), //Correlation Analysis// (standard Fourier-based analysis), //Event related analysis//, and //GLM analysis// (generalized linear models to produce statistical parametric maps).
* **Save** Save the currently selected analysis.
* **Save All** Save all analyses that have been opened in this session.
===== Overlay =====
===== ROI =====
===== Import =====
* **Group**
* **Time Series**
* **Overlay**
===== Export =====
* **Anatomy**
* **Time Series**
* **Overlay:** This function exports the current overlay into a NIFTI file. If a flat map is loaded, the user is asked how many different depths should be used to sample the cortex. If the user selects 10, for example, and the flat maps are 384 by 384, the flat map will be exported as a 384x384x10 volume, and each slice will be a different cortical depth.
* **ROI**
* **Image**
===== Write Readme.txt =====
===== Save mrSession =====
===== Print =====
===== Quit =====
====== Edit Menu ======
{{:mrtools:edit-menu.png|Edit Menu (Matlab on Linux)}}
===== Edit mrSession =====
Edit details of the scanning session such as the //description//, the //subject id//, //operator//, //magnet// and //pulse sequence//. These are usually set when running mrInitRet or [[mrTools:functionreference#mrInit| mrInit]].
===== Group =====
Manipulate groups (such as //Raw//, //MotionComp//, and //Averages//):
* New Group - create a new group
* Delete Group - delete an existing group, e.g. if you realize that there were some mistakes in averaging across scans, you can delete the group //Averages//. The function will clean up the appropriate entries in the view variable that goes along with your session.
* Edit Group - This allows you to edit details for each scan in a Group: the //scan description//, the //total number of frames// and the //junk frames// (also called 'dummies') at the beginning of each scan that get disregarded in any analysis.
* Info - show information about each scan in the current group. For the //Averages// group, this will also show you which files contributed to each average. This is quite an important tool for debugging and checking your analyses.
===== Scan =====
===== Base Anatomy =====
===== Analysis =====
===== Overlay =====
* **Edit Overlay**
* **alphaOverlayExponent**: this value can be positive or negative. **alphaOverlayExponent** specifies how much the alphaOverlay masks the selected overlay. With a positive value, low values in the alphaOverlay mask more than high values; with a negative exponent, it's the reverse. This is useful for masking overlays with p-value overlays (where you want high values to mask more).
===== ROI =====
===== Preferences =====
Edit important preferences that affect the overall behavior of mrLoadRet. You will see a menu like the following:
{{:mrtools:prefdialog.png|}}
^ preference ^ value ^
| Site | (e.g. NYU, RIKEN, RHUL, Nottingham, ...) |
| verbose | Yes if you want to have messages and waitbars displayed as dialogs, No to have information printed to the terminal |
| interpMethod | Type of interpolation to use. Normally this is set to nearest for nearest neighbor interpolation -- on a flat map you will be able to see a crystalline pattern of how the voxels are getting warped. If you prefer a smoother looking overlay, try setting this to linear |
| maxBlocksize | Size of chunks of data to analyze at a time. If you are running out of memory, set lower. A good starting point is 250000000. If you are using 64 bit Matlab (there is a beta which works with mrLoadRet for Mac OS X), set this to a very high number. |
| volumeDirectory | The directory to default to when you load base anatomies from the Volume directory |
| overwritePolicy | Method to use when analysis is going to overwrite an existing file |
| niftiFileExtension | Nifti file extension, usually .img but can be set to single file .nii |
| selectedROIColor | What color to use to draw the selected ROI. |
| roiPolygonMethod | Method used to create ROI polygons. The default roipoly function calls the line drawing function which can be very slow if you have already drawn a bunch of lines (i.e. have some ROIs displaying). If you choose getpts instead, you will not have the lines drawn between points as you draw the ROI, but it will be much faster. getptsNoDoubleClick is similar to getpts but instead of double-clicking to end the selection you hit the return key (on some machines Matlab's idea of what constitutes a double-click can be very slow) |
| roiCacheSize | Size of ROI cache, usually 100. |
| baseCacheSize | Size of base image cache. Set to the number of base slices you want to be able to quickly view |
| overlayCacheSize | Size of overlay image cache. Set to the number of base slices you want to be able to quickly view |
| roiContourWidth | Specifies the line width with which to draw the ROI. Depending on the type of base anatomy, a value of 2 may look better than the default of 1 |
====== Window Menu ======
===== New Window =====
Open up a new window, e.g. if you want to look at the same data set in the coronal and sagittal views simultaneously.
===== New Graph Window =====
Open up a new graphing window, into which results from new calls to the interrogator functions or functions from the Plots menu get rendered.
====== Analysis Menu ======
* **Motion Compensation** For more info see: [[mrTools:analyses#Motion Compensation|Motion Compensation]].
* **Average Time Series** For more info see: [[mrTools:analyses#Average Time Series|Average Time Series]]
* **Concatenate Time Series** For more info see: [[mrTools:analyses#Concatenate Time Series|Concatenate Time Series]]
* **Time Series Statistics** For more info see: [[mrTools:analyses#Time Series Statistics|Time Series Statistics]]
* **Correlation Analysis** For more info see: [[mrTools:analyses#Correlation Analysis|Correlation Analysis]]
* **Event Related Analysis** For more info see: [[mrTools:analyses#Event Related Analysis|Event Related Analysis]]
* **GLM Analysis** For more info see: [[mrTools:analyses#GLM Analysis|GLM Analysis]]
====== View Menu ======
====== ROI Menu ======
{{:mrtools:roimenu.png|}}
=====Create:=====
* **Contiguous Voxels (⌘C):** Selecting this submenu will create a new ROI and transform the cursor into a selection cross. The user then has to select a voxel that is not masked out in the overlay. The function will select all the voxels that are contiguous to this voxel and that are not masked out, across all the slices (3D contiguous ROI), and add it to the new ROI. The masking depends on the clip values of all the overlays of the current analysis.
===== Add =====
* **Contiguous Voxels (⌘B): ** Same as [[#Contiguous Voxels (⌘C)]] in the Create menu, but adds the contiguous regions to the current ROI
===== Subtract =====
* **Contiguous Voxels (⌘Y):** Same as [[#Contiguous Voxels (⌘C)]] in the Create menu, but subtracts the contiguous region from the current ROI
===== Combine =====
===== Restrict =====
===== Remove =====
===== Undo =====
===== Convert =====
* ** ConvertCoordinates:** Lists all the loaded ROIs. Attempts to identify which coordinate system they are in (e.g., the current base, or any other base that is loaded in the view). The user can then choose to convert the ROIs into the current scan coordinate system, the current base coordinate system, or any loaded base coordinate system; the user can also choose to adopt their xform.
* This feature is useful in several cases:
* **1:** ROIs are created in the coordinate system of the base anatomy that is loaded when the ROI is created. But there are cases where you need the ROI to be in a different system (e.g., if it has been created in low resolution and you want it in high resolution, or if it has been created on a flat map, but you need it in scan coordinates).
* **2:** The 'adopt xform' feature is useful when the alignment of the scan to the base has changed. If you want to keep the same ROIs, you can keep the same coordinates (so you don't have to redraw them), but you need to change the xform.
===== Show =====
====== Plots Menu ======
====== The main GUI window ======
==== Overlay Min/Max sliders ====
The "overlay min" and "max" sliders determine the range of values in the current overlay that are displayed on top of the base anatomy (with the current overlay colormap).
* if **maxClip > minClip**: (the usual!) display everything on the interval [ minClip, maxClip ]
* if **minClip == maxClip**: make overlay transparent, display interval is empty
* if **maxClip < minClip**: assume that the user wants to show values more extreme than minClip and maxClip. That is, **show all vals < minClip, vals > maxClip** (this is useful for displaying e.g. extreme values in two-tailed t- or z-stats).
Note that
* each overlay in the current analysis contributes to what's shown: multiple selections are combined by a logical //intersection//.
* the opacity of the selected overlay is further determined by the **alpha** value on the Alpha slider (or if an **alphaOverlay** is selected in Edit->Overlay->Edit Overlay).
{{:mrtools:main-gui-window-small.png|Main GUI window (Matlab on Linux) - labels coming!}}
On the new GUI (see [[:mrtools:tutorialsGLM_v2|GLM version 2]]) the controls are similar and are set as follows:
**Thresholds and Visibility of voxels**: The thresholds that are being used for each clipping overlay are set by the combination of Min/Max sliders and boxes, but can also be controlled through the **Edit->Overlay->Edit Overlay** window. These thresholds determine whether particular voxels are being displayed as part of the overlay color map or not, but in addition the (alpha) transparency for each voxel can also be controlled. This might be useful if you want to emphasize data with higher statistical significance values.
The overall transparency of the overlay is controlled by the **Alpha** slider, but the voxel-wise alpha transparency is determined by the alphaOverlay (to get info about the current overlay: **Edit->Overlay->Info**; to change it, select a different alphaOverlay after **Edit->Overlay->Edit Overlay**).
====== Preprocessing analyses ======
Preprocessing analyses are ones that create new time series. They are usually run before other analysis steps like correlation analysis and event-related analysis.
===== Motion Compensation =====
Motion compensation in mrLoadRet-4.5 uses the same computational engine as mrAlign.
To access the motion compensation window, go to the menu item Analysis/Motion Compensation.
The window will let you:
- Choose the Group to which motion compensation should be applied.
- Choose the type of interpolation (how pixels on two images will be chosen for alignment).
- Crop the image (clicking the crop button will bring up a new window on which to choose the first and last slice utilized for cropping and will let you crop the image). Note that the images ARE actually cropped when computing the alignment and that the same crop region is used for contrast correction also.
- Correct the image for contrast/intensity (see below).
- Do slice time correction. If you choose to do slice time correction, then mrLoadRet will resample the data in time to correct for the fact that the different slices were acquired at different times during each TR. This can make a noticeable difference when precise timing is desired.
This option is recommended for 2D acquisitions only (not if you are using the spiffy 3D EPI pulse sequence). Furthermore, it is important to have some junk frames at the beginning and at the end of your protocol (at least 3).
- Choose the scans to motion correct. Standard motion compensation parameters for 3x3x3 CBI EPI scans are:
- Interpolation method: cubic
- Number of iterations: 3
- Robust: NO (Unclicked, unless your scan session has both online and raw files; see below)
- Crop: just remove a little bit of background around the images (it runs faster). N.B. The overall motion compensation process is long (nearly 1 hr per scan on a MacPro 2 x 2.66 GHz Dual-Core Intel Xeon with 3 GB of RAM), so it is convenient to work from files on your local drive (i.e., copy data files and mrTools-4.5 files to a local directory before invoking dofmrinyu; otherwise any network error could stop the procedure).
==== Notes on intensity correction and robust estimation ====
Some more facts about these two options to help you decide when to use them:
=== Intensity Correction ===
** Pros:** corrects for intensity differences due, e.g., to coil placement, saturation bands, etc. ** Cons: ** blurs the data to do the calculation. **Reasons to use it: ** If the scans you are correcting come from different scanning days, or were taken with different acquisition parameters (e.g. raw and online), or have different voxel sizes, etc. Or if you're measuring hi-resolution data (because the saturation bands cause the signal to go to zero, and that big signal change will swamp out any smaller signal changes due to motion, making it impossible to measure or correct the motion-induced signals; the same goes for using 3D EPI sequences, because the signal in the first and last slices will go to zero). **Important Note:** If you use the intensity correction, you want to choose a crop region that is largely inside the brain so that it is the signal from within the brain that is used to calculate the correction.
=== Robust Estimator ===
** Pros: ** Less sensitive to outliers. **Cons:** prone to local minima and other problems of non-linear estimation, as compared with least-squares. **Reasons to use it:** If you have reason to believe your data suffers from outliers. In general, for all the problems listed above, you will use the intensity correction, but that correction does not work perfectly, and you may still be left with noisy outliers that you need to ignore in order for the motion compensation to succeed.
===== Average Time Series =====
This step allows you to average the time series for two or more scans. For example, if you are analyzing retinotopy data, you will want to average the expanding and contracting rings (with one reversed), and to average the rotating wedges (e.g. CW and reversed CCW). The same applies to any other experimental design in which you had repeated scans (e.g. to reduce noise), and to averaging across subjects or across days within the same subject.
Computes the average of the time series, time reversing and shifting as specified. The output is a new tSeries file added to the specified group, in the coordinate frame of the base scan. The order of operations is: 1) dump junk frames, 2) time shift, 3) time reverse, 4) average (transforming to the coordinate frame of the base scan).
==== Using the function with the GUI ====
When you choose averageTseries from the Analysis menu, you will see a box like this:
{{:mrtools:averaging:ats_f1.png|}}
The choices you have when using the GUI correspond to the choices you have for the parameters listed below, so they are not repeated here.
Once you hit 'ok', the average will be computed and will be added as a new scan to the indicated group.
==== Using the function in scripts ====
When scripting, it can be called as follows:
[view params] = averageTSeries(view,params,varargin)
The input 'params' is optional (default: user is prompted via averageTSeriesGUI). If it is provided, it must be a structure with all of the following fields:
^ fields ^ notes ^
|scanList| Vector of scan numbers to include in the average. Default: all of the scans.|
|tseriesfiles | Cell array of strings the same length as scanList, specifying the tseries filenames. Or 'any' to allow any match with the tseries files.|
|shiftList| Vector of integers the same length as scanList, each entry of which specifies the time shift (in frames, i.e., 1 means a shift of 1TR) for each scan. Default: all zero (no shift).|
|reverseList| Vector of the same length as scanList with non-zero entries indicating which scans to time reverse. Default: all zero (no time reversal).|
|baseScan| Number specifying the scan which is used to define the coordinate frame for the result. Default: first scan. |
|interpMethod| 'nearest', 'linear', 'cubic', or 'spline' as in interp3.|
|groupName| Group of scans that will be averaged. Default: current group of view.|
|aveGroupName| The new average scan is added to the specified group. If the group specified by aveGroupName does not exist, then a new group is created with this name. Default: 'Averages'.|
|description| Description string for the new average scan. Default: an automatically generated string naming the group and scans that were averaged.|
|fileName|The new tseries is saved in the specified filename. Default: 'tseries-mmddyy-hhmmss.img' or 'tseries-mmddyy-hhmmss.nii' where mmddyy = date and hhmmss = time. Chooses between .img and .nii, based on 'niftiFileExtension' preference.|
This function also saves an auxiliary file (tseriesfileName.mat) that can be used to recompute as follows:
load FileName.mat
eval(evalstr)
Examples:
params = averageTSeriesGUI('groupName','Raw');
view = newView;
view = averageTSeries(view,params);
view = averageTSeries(view);
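If you prefer to build the params structure by hand rather than through the dialog, a sketch using the field names from the table above might look like this (all values here are just illustrative and should be adapted to your own session):
v = newView;
params.scanList = [1 2];                 % the two scans to average
params.tseriesfiles = 'any';             % match any tseries filenames
params.shiftList = [0 0];                % no time shifts
params.reverseList = [0 1];              % time-reverse the 2nd scan (e.g. the CCW wedges)
params.baseScan = 1;
params.interpMethod = 'nearest';
params.groupName = 'Raw';
params.aveGroupName = 'Averages';
params.description = 'Average of scans 1 and 2 (scan 2 time-reversed)';
params.fileName = 'tseries-exampleAverage.img';   % hypothetical name; see the default naming above
v = averageTSeries(v,params);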
===== Concatenate Time Series =====
This step will take the scans you select, get rid of any junk frames, do a de-trending step using a hi-pass filter if you specify that, and then concatenate the resulting time series into a single time series that will be saved into a new group called Concatenation. This is an essential step if you plan to analyze data that was run over many consecutive scans.
Note that the concatenation code will keep track of many things for you: It will throw out the junk frames, but keep track of how many frames have been thrown out. It will internally make the timing consistent, so your stimulus timing files should be with reference to the original raw timing. For example, if you have a TR of 2 seconds, and have 5 junk frames at the start of each scan, and only begin your experimental protocol on the 6th time frame, your stimulus timing should reflect that the first stimulus is shown at 11 seconds, not 1 second.
Also note that the hi-pass filtering done here is a different function call from the filtering done in other parts of the code, such as percentTSeries.m, and the filter cutoff is specified differently: for the concatenation code, the hi-pass filter cutoff is set in Hz (default 0.01 Hz), while in percentTSeries it is set in cycles/scan. Concatenation will also set the mean of your time series to 1, so that if you try to divide by the mean again it won't cause an error.
After concatenation, concatTSeries saves a structure that gives you information about the concatenation. You can access that with a viewGet:
v = newView;
concatInfo = viewGet(v,'concatInfo',1,'Concatenation');
It has the following fields:
>> concatInfo
concatInfo =
n: 8
whichScan: [1x1536 double]
whichVolume: [1x1536 double]
nifti: [1x8 struct]
filename: {1x8 cell}
path: {1x8 cell}
junkFrames: [0 0 0 0 0 0 0 0]
hipassfilter: {1x8 cell}
runTransition: [8x2 double]
concatInfo.whichScan tells you the number of the scan each volume came from; its length is the number of volumes in the concatenation (1536 in this case). whichVolume tells you which volume of the original scan each volume came from. nifti, filename and path give you the nifti header, filename and path of each original scan. junkFrames keeps the junkFrames setting of each original scan. hipassfilter is a cell array of the filters that were used (it stores the fourier coefficients of each filter). runTransition is an array that tells you where the run transitions happen, i.e. for each scan, the first column gives the volume at which that scan starts and the second column gives the volume at which it ends.
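For example, here is a small sketch (hypothetical scan number, fields as described above) of how you might use runTransition and whichVolume to pull out the volumes belonging to one of the original scans:
scanNum = 3;      % hypothetical: look at the 3rd original scan
vols = concatInfo.runTransition(scanNum,1):concatInfo.runTransition(scanNum,2);
concatInfo.whichScan(vols)       % should all be 3
concatInfo.whichVolume(vols)     % counts up through the volumes of that original scan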
====== Postprocessing analyses ======
===== Correlation Analysis =====
==== Using the GUI ====
This allows you to calculate how well each voxel's timecourse correlates with a given modulation. This is useful for analyzing localizers, block-design experiments, and retinotopy experiments.
Take retinotopy, for example. After averaging the appropriate scans, you have a group ('Averages') with two scans, one with the timecourses related to the rotating wedges and the other with the timecourses related to the expanding rings.
Next, you compute a correlation analysis, so that you can visualize the cortical responses to the stimuli.
To run the correlation analysis from the GUI, choose the menu item Analysis/Correlation Analysis. For both scans, you will select (re)compute and set the number of cycles to 10. It should look like the following:
{{:mrtools:retinotopytutorial_coranal.png|}}
When it has finished running, you should see something like this:
{{:mrtools:retinotopytutorial_coranal4.png?600x600|}}
or this (viewed on the volume rather than on the EPIs):
{{:mrtools:ph_map_on_volume.png?600x600|}}
Notice that under the 'Analysis' tab on the bottom left hand corner of the mrLR screen, 'corAnal' is showing. Underneath the name of the analysis is the name of the overlay currently showing. The correlation analysis creates three overlays:
ph: phase
co: coherence
amp: amplitude
The co map gives a sense of which voxels were responding to the visual stimulus:
{{:mrtools:co_map_overlay_retinotopy.png?600x600|}}
Now, let's try to examine single voxel responses on the map.
You can set the Overlay min to 0.4 so that you are only looking at voxels with a high correlation value.
Then turn on the voxel interrogator, by going to Plots/Interrogate Overlay. You should see something like this:
{{:mrtools:coranalysis_interrogator.png?600x600|}}
(Or like this, viewed on the EPIs:)
{{:mrtools:retinotopytutorial_coranal5.png?600x600|}}
Note that now, an Interrogator drop down has appeared in the lower left of the viewer and that it is pre-set to corAnalPlot.
When you move your mouse over the image it should now turn into a giant cross-hairs. Choose one of the voxels (the example below is voxel [27 28 17]) with the darkest red color and click. After a moment, you should see a figure like this:
{{:mrtools:retinotopytutorial_coranal6.png?600x290|}}
Looks very well modulated at the stimulus frequency! The contrast-to-noise ratio (i.e. the magnitude of the response at the task frequency, marked in red in the lower right graph, divided by the mean of the points marked in green) is over 100-to-1.
Another voxel's data may look like this: (for voxel [35 39 17])
{{:mrtools:coranalysis_altvoxel.png?600x290|}}
==== Accessing the output ====
You can access the analysis using viewGet:
v = newView;
corAnalysis = viewGet(v,'analysis')
The analysis structure has the following fields:
curOverlay: 1
d: {[] [] []}
date: '23-Nov-2010 15:24:33'
function: 'corAnal'
groupName: 'Averages'
guiFunction: 'corAnalGUI'
mergeFunction: 'corAnalMergeParams'
name: 'corAnal'
overlayInterpFunction: 'corAnalInterp'
overlays: [1x3 struct]
params: [1x1 struct]
reconcileFunction: 'corAnalReconcileParams'
type: 'corAnal'
You can access each overlay (co, ph, and amp) using viewGet as well:
v = newView;
ov = viewGet(v,'overlay',[overlayNum],[analysisNum])
You can see which overlays are provided by the analysis as output:
>> viewGet(v,'overlayNames')
ans =
'co' 'amp' 'ph'
You can also access each overlay by name:
amp = viewGet(v,'overlay','amp')
The overlay has these fields:
amp =
alpha: 1
alphaOverlay: []
alphaOverlayExponent: 1
clip: [8.9680e-04 219.4702]
colormap: [256x3 double]
colormapType: 'normal'
data: {1x3 cell}
date: '23-Nov-2010 15:24:33'
function: 'corAnal'
groupName: 'Averages'
interrogator: 'corAnalPlot'
mergeFunction: 'corAnalMergeParams'
name: 'amp'
params: [1x1 struct]
range: [8.9680e-04 219.4702]
reconcileFunction: 'corAnalReconcileParams'
type: 'amp'
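For example, based on the fields shown above, the per-voxel amplitude values for the first scan of the group can be pulled out of the data field:
ampScan1 = amp.data{1};     % 3-d array with one amplitude value per voxel of scan 1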
===== Event Related Analysis =====
The event-related analysis done in MLR uses deconvolution to compute the estimated response to each stimulus type. There is a [[mrTools:tutorialsEventRelated|tutorial]] that will help familiarize you with how it works.
Any event-related analysis starts with the timing of the events. There are a few different ways to get the information about timing events for your scan:
- **MGL stimfiles** If you are using the mgl task code then you will get "stim" files that are associated with each run. These contain all the information about the experiment that you ran and can be used directly by MLR to calculate stimulus times in a variety of different ways.
- **stimvols** This is perhaps the simplest way to save your stimulus times. You make a matlab cell array called stimvol and each element in the cell array is an array which contains the volume numbers when each event occurred. For example, if you had an experiment in which you had two trial types, A and B, and they occurred on stimulus volumes 1, 10, 40 and 25, 55, 75 respectively, then you would have:
stimvol = {[1 10 40],[25 55 75]};
and you would save the file as:
save stimFilename stimvol
- **Time log** These files contain a variable mylog which should have the field mylog.stimtimes_s and optionally the field mylog.stimdurations_s. The field mylog.stimtimes_s should be a cell array, with length = number of conditions. Each cell should be a vector of times, in seconds, indicating the time of the stimulus onset for that condition. The vectors do not need to be the same length (e.g. if you have different number of trials for the different conditions). Optionally, you can also specify mylog.stimdurations_s, with a duration value to correspond to each onset time for each stimulus in each condition. A sample, with two conditions, one occurring at 0, 15.3 and 32 seconds and the other at 11 and 24 seconds, would look like the following:
mylog.stimtimes_s = {[0 15.3 32],[11 24]};
save stimFilename mylog
For both **stimvols** and **Time log** you can add a variable named stimNames to the .mat files. If stimNames exists, then it will be used by eventRelatedPlot to label the different conditions. For example, if you had three different conditions named "Type 1", "Type 2" and "Type 3":
stimNames = {'Type 1' 'Type 2' 'Type 3'};
And you would save it (if it were stimvols) like:
save stimFilename stimvol stimNames
Whatever format you use for your files, you should save them in a folder named Etc in your session directory. You can then link them to each scan using the GUI item Edit/Scan/Link Stimfile. Note that you should always link the stim files to the raw time series. You do **not** have to link them before you concatenate the time series. MLR keeps track of where each scan originally came from and works backwards to find out what the stim files are. So if you change the stimfile linkage at some later time, you do not need to relink the stim files.
Once you have the stimfile linked properly, you can run the event-related analysis. Typically, one does an event-related analysis on a concatenated time series. This is true even if you only have one scan, since the concatenate time series step takes care of the high-pass filtering of your data. The event-related analysis is done by making a stimulus convolution matrix from your stimvols. You need to choose how many volumes you want to compute the response out to (hdrlen). The code will then create a "Toeplitz" matrix (a matrix in which each diagonal contains a constant value) in which the first column contains a 1 at every stimulus volume and the remaining columns are shifted copies of it. This is also known as a stimulus convolution matrix. For example, if you had a scan with 10 volumes and your events happened on the 1st and 7th volumes, and you were computing a hemodynamic response length of 5 volumes, the matrix created would look like the following:
\left\lgroup\matrix{1& 0& 0& 0& 0&\cr
0& 1& 0& 0& 0&\cr
0& 0& 1& 0& 0&\cr
0& 0& 0& 1& 0&\cr
0& 0& 0& 0& 1&\cr
0& 0& 0& 0& 0&\cr
1& 0& 0& 0& 0&\cr
0& 1& 0& 0& 0&\cr
0& 0& 1& 0& 0&\cr
0& 0& 0& 1& 0&\cr}\right\rgroup
In general linear model language this is the "design matrix", and the solution is obtained using the pseudo-inverse. The assumption made in this analysis is that if responses overlap in time they sum linearly; that is, it is based on the assumption of temporal linearity. Evidence for temporal linear summation of BOLD responses has been reported in [[http://www.jneurosci.org/cgi/content/full/16/13/4207|Boynton, Engel, Glover and Heeger (1996) J Neurosci 16: 4207-4221]].
The code will also compute an r2 map which reports the amount of variance in the original time series that is accounted for by the event-related responses (see [[http://www.cell.com/neuron/retrieve/pii/S0896627305006100|Gardner, Sun, Waggoner, Ueno, Tanaka and Cheng (2005) Neuron 47:607-620]]).
The standard error bars for the hemodynamic responses are computed by first taking the sum of squares of the residual time course and dividing that by the number of volumes minus the number of responses times the hdrlen. This residual variance is then distributed to each time point in the estimated response according to the inverse of the covariance of the design matrix. These errors assume that the noise is white and gaussian. After high-pass filtering to remove low frequency signal changes and drift, this assumption is not completely unreasonable (look at the frequency spectrum of the residuals; it should be flat).
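To make this concrete, here is a minimal sketch (plain Matlab, not the actual mrTools implementation) that builds the stimulus convolution matrix for the 10-volume example above and shows how the deconvolution solution would be obtained with the pseudo-inverse:
nVols = 10; hdrlen = 5; stimvols = [1 7];      % same numbers as the example above
scm = zeros(nVols,hdrlen);
for iStim = stimvols
  for j = 1:hdrlen
    if iStim+j-1 <= nVols
      scm(iStim+j-1,j) = 1;                    % shifted copies of the stimulus vector
    end
  end
end
scm                                            % matches the matrix shown above
% given a voxel's time series as a column vector tSeries (nVols x 1), the
% estimated hemodynamic response would be:
% hdr = pinv(scm)*tSeries;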
==== preprocess ====
There are times when you may need to create stimvols in ways that are more complicated than for a normal experiment. For example, you may want to sort your trials by the subject's responses. In these cases, you can create your own stimvol array explicitly and save that as a file that you link the scan to (see above).
You can also specify a preprocess function in the eventRelated gui:
{{:mrtools:eventrelatedguipreprocess.png|}}
The function you specify here can adjust the stimvols in any way that you want. It should be a function that takes a "d" parameter and returns a "d" parameter. Here is a very simple example which takes stimvols that specify a set of different stimuli (i.e. is a cell array of k different arrays specifying stimvols for each stimulus condition) and collapses that into a single type (i.e. creates stimvols that will calculate the response across all different stimulus types):
function d = mypreprocess(d)
d.stimvol = cellArray(cell2mat(d.stimvol));
In principle, you can do whatever processing is necessary to create a stimvol field that you need. The d structure that is passed in contains information about your scan. It looks like:
d =
ver: 4.5
scanNum: 1
groupNum: 3
description: 'Concatenation of MotionComp:2 3 4 5 6'
tr: 1.2
voxelSize: [3 3 3.00000047683716]
xform: [4x4 double]
nFrames: 1875
dim: [64 64 22 1875]
filename: 'tseries-080222-192610.img'
filepath: [1x88 char]
expname: 'jg060918_mr4'
fullpath: '/Volumes/eigenraid/data/nyu'
data: [4-D double]
junkFrames: [0 0 0 0 0]
dicom: {[1x1 struct] [1x1 struct] [1x1 struct] [1x1 struct] [1x1 struct]}
stimfile: {[1x1 struct] [1x1 struct] [1x1 struct] [1x1 struct] [1x1 struct]}
concatInfo: [1x1 struct]
varname: [1x1 struct]
stimNames: {'1' '2' '3'}
stimvol: {[1x88 double] [1x88 double] [1x88 double]}
supersampling: 1
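As another hypothetical example using the stimvol and stimNames fields shown above, this preprocess function collapses conditions 1 and 2 into a single condition and leaves condition 3 separate (whether downstream plotting picks up the new stimNames may depend on your version, so treat this as a sketch):
function d = mypreprocess2(d)
% combine conditions 1 and 2 into a single condition, keep condition 3 separate
d.stimvol = {sort([d.stimvol{1} d.stimvol{2}]) d.stimvol{3}};
d.stimNames = {'1 or 2' '3'};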
===== GLM Analysis =====
==== Create stimFiles ====
To run the GLM analysis, you need to create stimulus files that have the timing information for your experiment.
You should have a separate stimFile for each scan. See above under [[:mrtools:analyses#Event Related Analysis|Event Related Analysis]] for a description of possible formats.
Just like for the Event Related Analysis, you then need to let mrSession know that these stimFiles have been defined. This can be done through the GUI menu (Edit/Scan/Link Stimfile). You can manually link each scan to a matlab .mat file (make sure to save mrSession afterwards). Alternatively, this can be scripted using code along the lines of:
mrGlobals, view = newView;
groupNum = viewGet(view,'groupNum','MotionComp'); % or whichever group you're concatenating
numScans = viewGet(view,'nScans',groupNum);
for iScan = 1:numScans
stimFilename = ['stimFiles/stimFile_' num2str(iScan) '.mat'];
view = viewSet(view,'stimFilename',stimFilename,iScan,groupNum);
end
saveSession;
(Note that if the 1st stimFile goes with the 2nd EPI file, you need to adjust the indexing appropriately.)
==== Concatenate your data ====
You can either do this through the GUI or in a script. See details above for more info about concatenating.
In the GUI: choose Analysis -> Concatenate Time Series
To script: You have to set the parameters and then call the code. For example:
mrGlobals, view = newView;
params.groupName = 'MotionComp'; % sets which group to take the scans from
params.newGroupName = 'Concatenation'; % sets the name of the group that results from the concatenation
params.description = 'Concatenation of [x...x]';
params.filterType = 1;
params.filterCutoff = 0.0100;
params.percentSignal = 1;
params.projectOutMeanVector = 0; % explicitly specify that you do/do not want to do this
params.warp = 0; % should set warp to 1 if combining across days
params.warpInterpMethod = []; % choose (e.g., 'linear') if combining across days
params.scanList = [1:viewGet(view,'numscans',viewGet(view,'groupNum',params.groupName))];
view = concatTSeries(view,params); %actually do the concatenation
==== Run the glm analysis ====
You can either do this through the GUI or in a script.
If you want to adjust the analysis code to the needs of your particular situation, you need to do it in a script, not the GUI, but you can use the code that the GUI uses as a guide.
The outcome of this step will be an overlay (or multiple overlays, if you want) that can be viewed in the GUI.
===== Time Series Statistics =====
This computes several maps/overlays: mean over time, median over time, std dev over time, max frame-to-frame difference, max difference between each frame and the median, and the mean divided by the stdev.
The structure can be accessed using viewGet, and has the following fields:
>> viewGet(v,'analysis')
ans =
curOverlay: 4
d: []
date: '16-Nov-2010 12:48:37'
function: 'timeSeriesStats'
groupName: 'Raw'
guiFunction: 'timeSeriesStatsGUI'
mergeFunction: 'defaultMergeParams'
name: 'timeSeriesStats'
overlays: [1x6 struct]
params: [1x1 struct]
reconcileFunction: 'defaultReconcileParams'
type: 'timeSeriesStats'
Individual overlays can be accessed by name, using viewGet:
>> viewGet(v,'overlay','mean')
ans =
alpha: 1
alphaOverlay: []
alphaOverlayExponent: 1
clip: [0 3.1737e+04]
colormap: [256x3 double]
colormapType: 'normal'
data: {1x10 cell}
date: '16-Nov-2010 12:48:37'
function: 'timeSeriesStats'
groupName: 'Raw'
interrogator: 'timeSeriesStatsPlot'
mergeFunction: 'defaultMergeParams'
name: 'mean'
params: [1x1 struct]
range: [0 3.1737e+04]
reconcileFunction: 'defaultReconcileParams'
type: 'mean'
When the timeSeriesStats is loaded, you should see something like this:
{{:mrtools:timeseriesstats_meanoverstd.png?600x600|}}
* The mean map (and the mean/std map) of a successfully motion compensated group should line up well with the inplanes.
It is possible to check this by moving through successive scans (with the scan slider). They all should line up with one another.
* The median map should look similar to the mean map.
* The std map is a cheap way to find active brain regions without any knowledge of the experimental protocol.
{{:mrtools:timeseriesstats_better_stdevoverlay.png?600x600|}}
* If you go to Plots/Interrogate Overlay, the interrogator in the lower left hand corner will come up with a default plotter. When you select a voxel using the cross-hairs, you should see something like this:
{{:mrtools:timeseriesstats_interrogator.png?600x300|}}
* The max-frame-diff and max-median-diff maps are also useful for checking the motion compensation. Before motion compensation, these maps should show large values near the gray/white and gray/CSF boundaries because there are large intensity fluctuations over time due to motion. After motion compensation these "edge effects" should be largely eliminated.
====== Coordinate Transforms ======
Coordinate transforms lie at the heart of how MLR displays data on different base anatomies and how it keeps track of the location of ROIs. MLR coordinate transforms are based on the transformation information that is part of the [[http://nifti.nimh.nih.gov/|Nifti]] image standard. You may want to also read the [[:mrTools:mrAlign|mrAlign]] manual.
===== Nifti Qform and Sform =====
Nifti specifies two coordinate transformations in the header of every image file:
* **Qform** is the coordinate transformation that takes the coordinates of the image into the coordinates of the magnet. It should be set in the data by the MRI image acquisition software.
* **Sform** in the context of MLR is a coordinate transformation that aligns the images to a standard reference, in our case the volume anatomy for the subject. It is usually set by mrAlign.
Both of these transformations have an associated code, qform_code or sform_code, that in MLR can be any one of:
* **0** the transformation has never been set
* **1** a transformation into magnet coordinates
* **3** a transformation into Talairach coordinates
So, when your images come off the magnet, they should start out with their Qform set. This transformation tells you how to go from the coordinates of the image into the coordinates of the magnet.
What does this mean?
Well, if you have an image, then each voxel is labeled according to the [Ximg Yimg Zimg] location of where it is in the matrix specifying the image:
{{:mrtools:matlab_coordinates.gif|}}
The Qform gives us a transformation matrix (internally stored as a quaternion) that can convert the [Ximg Yimg Zimg] coordinate of a voxel in the image to the location in mm from the center of the bore, [Xmag Ymag Zmag], where the center of that voxel lies.
{{:mrtools:varian_magnet_with_coordinate_axis.gif|}}
Note how the magnet coordinates are in neurological coordinates. That is, the x dimension is such that the right of the brain is in the right of the image. Actually, it is more descriptive to say that the magnet coordinates use the LPI convention, because the first (x) dimension goes from Left to right, the second (y) dimension goes from Posterior to anterior and the third (z) dimension goes from Inferior to superior.
The transformation from image coordinates to magnet coordinates is done by multiplying the 4x4 affine transformation matrix specified by the Qform times the image coordinates [Ximg Yimg Zimg]:
\left\lgroup\matrix{x_{mag}\cr y_{mag}\cr z_{mag}\cr1\cr}\right\rgroup
= \left\lgroup\matrix{rs_{11}& rs_{12} &rs_{13} &t_{x_{mm}}\cr
rs_{21}& rs_{22} &rs_{23} &t_{y_{mm}}\cr
rs_{31}& rs_{32} &rs_{33} &t_{z_{mm}}\cr
0 & 0 &0 &1\cr}\right\rgroup.
\left\lgroup\matrix{x_{img}\cr y_{img}\cr z_{img}\cr1\cr}\right\rgroup
The Qform above is composed of a 3x3 portion that rotates and scales (specified by the rs's) and a 3x1 portion that translates (specified by the t's). Note that the coordinates have been made into **homogeneous** coordinates. This means that they have an added 1. This is a trick, the purpose of which is to allow us to do the translation and rotation in a single matrix multiplication.
The above Qform is really the combination of three transformations: a translation, which specifies the distance in voxels to the center of the voxel at location [1 1 1]; a scaling, which specifies the distance in mm between the centers of voxels (this is usually the same as your voxel dimensions, unless, for instance, you put a gap between slices); and a rotation around the center of the bore of the magnet.
\left\lgroup\matrix{rs_{11}& rs_{12} &rs_{13} &t_{x_{mm}}\cr
rs_{21}& rs_{22} &rs_{23} &t_{y_{mm}}\cr
rs_{31}& rs_{32} &rs_{33} &t_{z_{mm}}\cr
0 & 0 &0 &1\cr}\right\rgroup
= \left\lgroup\matrix{r_{11} &r_{12} &r_{13} &0\cr
r_{21} &r_{22} &r_{23} &0\cr
r_{31} &r_{32} &r_{33} &0\cr
0 & 0 &0 &1\cr}\right\rgroup.
\left\lgroup\matrix{s_{x} &0 &0 &0\cr
0 &s_{y} &0 &0\cr
0 &0 &s_{z} &0\cr
0 & 0 &0 &1\cr}\right\rgroup.
\left\lgroup\matrix{1 &0 &0 &t_{x_{img}}\cr
0 &1 &0 &t_{y_{img}}\cr
0 &0 &1 &t_{z_{img}}\cr
0 & 0 &0 &1\cr}\right\rgroup
The units of the t_img terms are voxels, the s's are mm, and the t_mm terms are mm in the rotated reference frame.
There is one little hitch with this. The Qform specifies the transformation assuming that the image coordinates have their origin at (0,0,0) rather than (1,1,1), which is what Matlab uses. So, to get from the coordinates of the image into the coordinates of the magnet, you actually need to shift the origin by -1 first:
\left\lgroup\matrix{x_{mag}\cr y_{mag}\cr z_{mag}\cr1\cr}\right\rgroup
=
\left\lgroup\matrix{rs_{11}& rs_{12} &rs_{13} &t_{x_{mm}}\cr
rs_{21}& rs_{22} &rs_{23} &t_{y_{mm}}\cr
rs_{31}& rs_{32} &rs_{33} &t_{z_{mm}}\cr
0 & 0 &0 &1\cr}\right\rgroup.
\left\lgroup\matrix{1& 0 &0 &-1\cr
0& 1 &0 &-1\cr
0& 0 &1 &-1\cr
0& 0 &0 &1\cr}\right\rgroup.
\left\lgroup\matrix{x_{img}\cr y_{img}\cr z_{img}\cr1\cr}\right\rgroup
But, this is a detail. So we'll ignore it below.
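As a small numerical sketch of these equations (all values below are made up for illustration; real Qforms come from the Nifti header), here is how the pieces combine and how the -1 origin shift is applied to a Matlab image coordinate:
% illustrative values only -- in practice the qform comes from the Nifti header
R = eye(3);                              % rotation part (identity here for simplicity)
S = diag([1.5 1.5 3]);                   % scaling: 1.5 x 1.5 x 3 mm voxels
tMag = [-48; -48; -15];                  % translation in mm (rotated reference frame)
qform = [R*S tMag; 0 0 0 1];             % 4x4 affine transform (rotate/scale, then translate)
shift = [eye(3) -ones(3,1); 0 0 0 1];    % shift Matlab 1-based coords to Nifti 0-based coords
imgCoord = [33; 33; 6; 1];               % homogeneous image coordinate (Matlab convention)
magCoord = qform * shift * imgCoord      % position in mm relative to magnet isocenter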
===== Aligning across sessions =====
So, now that we know how to transform the coordinates of an image into magnet coordinates, we can transform from one image's coordinates into another image's coordinates, as long as the two images were collected on the same day. For example, if we want to find the location of a voxel from our inplane anatomy in an EPI image collected on the same day, we can transform **through** the magnet coordinates:
{{:mrtools:coordinatetransforms_within_day_xform.gif|}}
The Qform of the inplane (Qforminplane) transforms into the magnet coordinates and the **inverse** of the QformEPI takes you from the magnet coordinates to the EPI image coordinates:
\left\lgroup\matrix{x_{EPI}\cr y_{EPI}\cr z_{EPI}\cr1\cr}\right\rgroup
= \left\lgroup\matrix{Qform_{EPI}\cr}\right\rgroup^{-1}.
\left\lgroup\matrix{Qform_{inplane}\cr}\right\rgroup.
\left\lgroup\matrix{x_{inplane}\cr y_{inplane}\cr z_{inplane}\cr1\cr}\right\rgroup
Note that this does not assume that the inplane and EPI are taken with the same slice prescriptions. Since it uses the Qforms set by the magnet software, it will transform from inplane coordinates to EPI coordinates regardless of how the slices were placed in the inplane and EPI images. That is, as long as the subject didn't move their head in between the scans.
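In Matlab, this within-session transform is just a couple of matrix multiplications. A hedged sketch, assuming you already have the two 4x4 Qform matrices as variables (e.g. read from the Nifti headers); in practice the MLR viewGet transforms described below do this for you:
% qformInplane and qformEPI are the 4x4 qform matrices from the two Nifti headers
% (and therefore use the 0-based image-coordinate convention described above)
inplane2epi = inv(qformEPI) * qformInplane;                  % inplane coords -> EPI coords
epiCoord = inplane2epi * [xInplane; yInplane; zInplane; 1];  % homogeneous coordinate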
Now, what happens when we want to find where an inplane voxel comes from in the volume anatomy, which was taken during a different session in which the subject was not placed in exactly the same location in the magnet? In this case we have:
{{:mrtools:coordinatetransforms_across_sessions1.gif|}}
In this case, while the Qforminplane and the Qformvolume take the inplane and volume coordinates into the magnet coordinates, there is a problem: the subject's head was **not** in the same location on the two days that the inplane and volume images were taken. Thus, we can't compute the transformation from the inplane to the volume coordinates. We need an alignment:
{{:mrtools:coordinatetransforms_across_sessions3.gif|}}
mrAlign computes the alignment between the inplane images and the volume images by trying to find the best match between the two images. A good starting place for this is to initialize the alignment from the headers, which means using the Qform information. As noted above, this won't give a perfect alignment because the subject's head was in a different location on the two days. But it's a good starting place, and mrAlign will find a good alignment from there. Now we know the alignment from the inplane to the volume, and from the volume to the magnet on the day the volume was taken (Qformvolume). So, we can compute the transformation from the inplane to the magnet coordinates **as if the head were in the same place on the day we acquired the inplane as the day we acquired the volume**. mrAlign puts that transform into the Sforminplane.
\left\lgroup\matrix{Sform_{inplane}\cr}\right\rgroup
= \left\lgroup\matrix{Qform_{volume}\cr}\right\rgroup.
\left\lgroup\matrix{Alignment\cr}\right\rgroup
So, the Sform contains the transformation of an image into the magnet space as if the head were in the same place as the day that the volume anatomy was taken. The Sformvolume is simply set to the Qformvolume since it already specifies the transformation of the volume coordinates to the magnet coordinates on the day that the volume was taken. (This is what mrAlign does when you set "Set Base Coordinate Frame" in the File menu).
So, now that we have the Sforms set, the transformation from the inplane to the volume is easy. It looks like this:
{{:mrtools:coordinatetransforms_across_sessions2.gif|}}
So as long as the Sform is set in the same way, we can now compute the transformation from the inplane image coordinates to the volume coordinates. That is, to find which voxel in the volume ([Xvolume Yvolume Zvolume]) the voxel from the inplane image [Xinplane Yinplane Zinplane] comes from, the following computation is done:
\left\lgroup\matrix{x_{volume}\cr y_{volume}\cr z_{volume}\cr1\cr}\right\rgroup
= \left\lgroup\matrix{Sform_{volume}\cr}\right\rgroup^{-1}.
\left\lgroup\matrix{Sform_{inplane}\cr}\right\rgroup.
\left\lgroup\matrix{x_{inplane}\cr y_{inplane}\cr z_{inplane}\cr1\cr}\right\rgroup
You should never actually have to work with the Qforms and Sforms directly, though. If you ever need to do transformations explicitly, you should get the transformation matrices using viewGet. MLR will compute the correct transformations as discussed above using the Sforms. It will also take care of the shift by 1 (to account for Matlab's 1-based numbering system and the Qform/Sform's 0-based numbering system). There are transformations base2scan and scan2scan, among many others. If you need the scan2base instead of the base2scan, simply use inv(base2scan). So to transform a coordinate in your base [X Y Z] to find its coordinate in the scan, you would do:
base2scan = viewGet(v,'base2scan');
scanCoord = base2scan * [X Y Z 1]';
You may want to round scanCoord to the nearest voxel. Note that if you want to transform many coordinates at the same time, X, Y and Z should be **row** vectors (whose length is the number of coordinates). For example, to convert the points (1,5,1), (12,10,3) and (24,30,5):
X = [1 12 24];
Y = [5 10 30];
Z = [1 3 5];
baseCoords = [X;Y;Z;ones(1,length(X))];
base2scan = viewGet(v,'base2scan');
scanCoords = base2scan * baseCoords;
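Because the resulting scan coordinates are generally fractional, you will usually want to round them (and drop the homogeneous row) before using them as voxel indices:
scanCoords = round(scanCoords(1:3,:));   % 3 x nCoords integer scan voxel indices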
All coordinate transformations in MLR are handled in a similar way, be they from a base anatomy coordinate to a scan (epi) coordinate, or even from one scan coordinate collected on one day to a scan coordinate collected on a different day. ROIs inherit the Sform (and the voxel size) of the base anatomy that they were defined on, and then from there the transformations work in an analogous way. If you want to get ROI coordinates in the scan or in a base anatomy, you should always use MLR functions [[:mrTools:functionReference#loadROITSeries|loadROITSeries]] or getROICoordinates which use the function xformROIcoords to transform the ROI coordinates. This is because xformROIcoords does some extra processing to try to maintain ROI volume across different voxel sizes.
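A hedged example of the recommended route (the exact argument forms are from memory and should be checked against the function reference linked above):
v = newView;
% load the time series of all voxels in an ROI; this resolves the ROI coordinates
% into scan space for you via xformROIcoords
roi = loadROITSeries(v,'V1');
% or just get the ROI's coordinates in the current scan's coordinate frame
scanCoords = getROICoordinates(v,'V1');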
-jg.
====== Talairach ======
If a base anatomy has a Talairach transformation assigned to it, mrTools will display Talairach coordinates when that base is being viewed. In addition, any ROIs defined on a base with a Talairach transformation can be aligned with ROIs from any other brain on the basis of the Talairach values. mrTools does all of this automatically, and gives specific warnings and instructions so that you are aware when this is happening.
=====Overview=====
The way we've implemented the use of registration to the Talairach coordinate system is by keeping track of a single transform. We don't currently support anything other than a single transformation of the whole brain. Once a transform is defined for a given brain (see below), it can be saved to the base volume for that brain and it will then be inherited by everything aligned to or defined on that base volume. In this way, scans and ROIs from different subjects can be aligned with one another by transforming to Talairach space instead of magnet space. This happens behind the scenes, so that scan2scan, base2base and roi2roi are all computed using the Talairach transform if it is available.
==== To assign a Talairach transformation to a volume anatomy, you will need to load it in mrAlign, and then follow these steps: ====
- Load volume anatomy in mrAlign
- In the mrAlign menu bar, pull down the 'Talairach' tab and select 'Set Talairach Transform'
- This will open four windows: three volume views (sagittal, coronal, axial), and a point-setting command center. For detailed instructions on how to use this interface, go to [[talairachcoordinates|How to Set Talairach Coordinates]]
- Using the 3 viewing windows, determine the location of the anchoring points (e.g., AC, PC, SAC, etc). I find [[http://www.med.harvard.edu/AANLIB/cases/caseNA/pb9.htm|this website]] helpful for determining the correct location of AC and PC.
- For each anchor point, when you are happy with your choice, pull down the 'setPoint' menu and select the point you have assigned.
- For choosing AAC and PPC, and LAC and RAC, it is useful to use the displayPlane and straighten buttons so that you are viewing the volume in the corrected AC/PC plane. [NOTE: To start over without saving, hit 'Cancel']
- When you're done, hit 'OK.' This automatically saves the rotation matrix that has been calculated based on the anchor points. The rotation matrix is saved in the .mat file that exists for every base anatomy. As the inplane anatomies and functional scans are aligned to this volume, the correct rotation matrix will be inherited at each step.
- If you need to adjust your Talairach points at a later date, follow steps 1-7. Then, under the Talairach file menu, choose 'Export Talairach.' This will allow you to correct the transformation for any files that have been created based on the volume you have changed.
===== How the chosen points are used by mrLoadRet =====
We keep everything in the base structure and we don't change the Nifti headers: we do not reset the sform_code to 3, we do not actually rotate the volume, and we do not change the sform. Rather, we leave the sform_code as 1 and save talXform to the base structure, where it is used for viewing and aligning.
See [[talequation|here]] for details of how talXform is computed, from points chosen by the user.
======Troubleshooting======
===== Volume Anatomy Rotations (Order of Dimensions) =====
If you think something might be going wrong because the voxels in your volume anatomy aren't in the right order, [[checkVoxOrder|here]] are some instructions to help you figure it out.
====== Surfaces and Flat Maps ======
This section will give you an overview of how surfaces and flat maps work in MLR. You may wish to start by going through the [[:mrTools:TutorialsSurfacesAndFlatMaps|Surfaces and Flat Maps Tutorial]] before going through this page.
It is important to note that MLR is **not** a tool for segmenting 3D volumes and generating surfaces. You can use any tool that is capable of generating an outer (pial) surface and inner (boundary between gray and white matter) surface. If you have these two files, then you will be able to use the full functionality of surface and flat map visualization in MLR. We typically use [[http://surfer.nmr.mgh.harvard.edu|FreeSurfer]] (see [[:mrtools:howto#use_free_surfer_with_mlr|here]] for how to import) to generate segmentations. Jonas Larsson's [[http://www.pc.rhul.ac.uk/staff/J.Larsson/software.html|SurfRelax]] and Caret (see [[:mrtools:howto#use_caraet_with_mlr|here]] to import) can also be used. Caret, however, only generates a mid-cortical surface and not both the inner and outer surfaces that are useful for displaying data at different cortical depths.
3D surfaces and flat maps are used to display fMRI data from the gray matter of the cortex. They are a helpful visualization tool. But it is important to understand that surfaces and flat maps are **just** a visualization tool. MLR always keeps your data analyzed in the way it was collected -- in the coordinates of the original EPI images. Whenever you view data on a surface or a flat map, MLR is transforming your already analyzed data on the fly.
===== Surface primer =====
To understand how this all works, let's start with a primer on surfaces.
Surfaces start with segmentations. A segmentation refers to segmenting the gray matter from the white matter of the cortex. For example, the image below shows an example of a segmentation generated by SurfRelax:
{{:mrtools:surftutorial_import4.png|}}
Note how the yellow dots follow the outer surface of the gray matter and the white dots follow the inner surface of the cortex. How well the segmentation is done is absolutely crucial for how well the surface visualization works. If the segmentation is poor, you might be viewing data not from the gray matter but from the white matter instead. It is thus always a good idea to carefully examine the segmentations that you have generated with outside tools using the flat and surface viewers (see the [[:mrTools:tutorialsSurfacesAndFlatMaps|tutorial]] for more info on this). If you think you have a bad segmentation (i.e. the yellow and white dots don't follow the outer and inner surfaces of the gray matter), it is a very good idea to go back to your segmentation software and tweak parameters until you get a good segmentation.
Once a segmentation is done, your surface generation software will generate a 3D surface out of the segmentations. Here is one that SurfRelax created:
{{:mrtools:surfacesandflatmaps2.png|}}
While the image above makes it look like we have a very smooth surface, in actuality the surface is composed of many, many little flat triangles. It is easiest to see this if we look at a zoomed-in view of the same surface and draw the edges of these triangles:
{{:mrtools:surfacesandflatmaps1.png|}}
Each of the little triangles you can see is specified by the coordinates of its vertices. To see this, we zoom in a bit further and show you the coordinates of the three vertices of one particular triangle:
{{:mrtools:surfacesandflatmaps3.png|}}
Note that the x,y,z coordinates of each vertex are the actual location of that particular vertex in the volume from which the surface was created. That is, in the first image shown above, which shows the segmentation as yellow and white dots, each one of those dots is a vertex. The location of the vertex is specified as coordinates in that 3D volume image. This is important! To be clear, the 3D volume might be an image that is 256x256x176 (for instance). Then the x,y,z coordinates of each vertex are an actual voxel location within that 3D image. Note that the x,y,z coordinates can be fractional numbers, which specify a location within a voxel.
So, the specification of the surface involves two pieces of information. One is a list of all the vertices x,y,z locations in the 3D volume. The second is a list of which sets of three vertices are connected in a triangle.
So, for example if we load a surface up from the [[:mrTools:tutorialsSurfacesAndFlatMaps|tutorial]], you will see that we get both these sets of information:
>> surf = loadSurfOFF('leftInner.off')
surf =
filename: 'leftInner.off'
Nvtcs: 145596
Ntris: 291188
Nedges: 436782
vtcs: [145596x3 double]
tris: [291188x3 double]
The vtcs are a list of 145596 vertices (numbered in order) with an x,y and z coordinate. So, for example, the first vertex has a location in the 3D volume that looks like this:
>> surf.vtcs(1,:)
ans =
74.8006362915039 62.4711685180664 73.3120651245117
And the triangles are listed as the three vertices (remember, the vertices are numbered in order) that are connected to each other. So, for example, the first triangle connects the first three vertices together:
>> surf.tris(1,:)
ans =
1 2 3
Note that there is a convention to how the vertices are ordered. The convention adopted by SurfRelax is that the vertices are ordered in clockwise order when viewed from the outside. If your segmentation software follows a different convention, then it is possible that flat maps will be shown in the wrong orientation (that is, you need to xy flip them so that right is right and up is up).
There is one final piece of information that is useful for drawing the surfaces. That is what color each vertex is. Typically for our surfaces, we use the curvature. The curvature is what it sounds like, how curved the surface is and in which direction (convex or concave). Technically, we are computing the [[http://en.wikipedia.org/wiki/Mean_curvature|mean curvature]] of the surface. We use the function calcCurvature to compute a curvature file which simply is a grayscale value that tells us how to color each vertex in the surface. There are as many values of the curvature as there are vertices. Each triangle of the surface is then colored in such a way that each vertex is colored with its specific curvature value and parts of the triangle in between vertices are interpolated between these colors. If you have done the [[:mrTools:tutorialsSurfacesAndFlatMaps|tutorial]], you can load in the computed curvature. We also clip the color values of the curvature, so that it will display nicely.
>> curv = loadVFF('leftCurvature.vff');
>> length(curv)
ans =
145596
>> curv(curv<-1.2) = -1.2;curv(curv>1.2) = 1.2;
So now we have all the information that defines the surface: The vertices with their 3D coordinates in the volume, the triangles which specify which vertices are connected together, and the curvature which tells us how to color each vertex. You can easily display a surface like this in Matlab:
>> patch('vertices', surf.vtcs, 'faces', surf.tris,'FaceVertexCData',curv','facecolor','interp','edgecolor','none');
>> colormap(gray);
So, your segmentation and surface generation software (e.g. FreeSurfer or SurfRelax) needs to create two surfaces like the ones described above. It is important to note that the surfaces are in register, in the sense that they have the same number of vertices and triangles. The only difference between the two surfaces is the location of the vertices (either on the pial or gray/white matter boundary).
Any program that can generate the outer and inner surfaces as described above can be used to generate the types of surfaces that are used in MLR. All you need to do is make sure that these two surfaces are in ".off" format, which is a fairly simple format that can be written out of Matlab using our function writeOFF.
Note that SurfRelax uses .off files as its native format. If you use FreeSurfer, you can use the function [[:mrtools:howto#Use_Free_Surfer_with_MLR|mlrImportFreeSurfer]] to convert FreeSurfer files to .off. If you are using other software and you read your surfaces into Matlab, you should be able to write them out in .off format using the function writeOFF.
===== Surface Base Anatomies =====
Now that you understand how surfaces are represented as a set of vertices and triangles, we can explain how MLR uses this information to display your data on a surface.
It's really simple.
The vertices in each surface are the x,y,z locations in the 3D volume anatomy from which it was created. MLR, using the Nifti Sform matrices of your 3D volume anatomy and the EPI images, can compute the transformation that takes you from the volume anatomy to the corresponding point in the EPI image. We call this the base2scan transformation matrix, and it can be retrieved from MLR using viewGet(v,'base2scan').
So, to color the surface with an overlay, MLR multiplies each vertex coordinate with the base2scan transformation and finds the position in the scan that is associated with that vertex, and then colors that vertex according to the color of the overlay at that position.
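Conceptually, that lookup looks something like the sketch below (the variable overlayVolume is illustrative only, not an MLR API; real MLR code also interpolates and checks that coordinates fall inside the scan):
base2scan = viewGet(v,'base2scan');
vtcs = surf.vtcs;                                        % nVtcs x 3 vertex coords in volume space
scanCoords = base2scan * [vtcs ones(size(vtcs,1),1)]';   % 4 x nVtcs homogeneous scan coords
scanCoords = round(scanCoords(1:3,:));                   % nearest scan voxel for each vertex
% overlayVolume is an illustrative 3D array of overlay values in scan space
vertexColor = overlayVolume(sub2ind(size(overlayVolume), ...
    scanCoords(1,:),scanCoords(2,:),scanCoords(3,:)));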
Now, what is important to understand is that when we display surfaces there are really two different kinds of surfaces involved: display surfaces and coordinate surfaces. The coordinate surface is the one just described above. It is the one that tells us where each vertex that we are displaying is in the 3D volume. Often, when we display our data, the display surface will be one and the same as the coordinate surface. But it doesn't have to be. For example, if we have an inflated surface (that is in register with the coordinate surface, i.e. has corresponding vertices and triangles), then the vertices of the inflated surface are in different locations than those of the coordinate surface (i.e. they are inflated). That way the surface we display is inflated, but the coordinates that are used to get the data come from the right part of the gray matter.
We also keep track of two depths of cortex, the outer and inner layers. That way, when we set the cortical depth slider to 1, we can display the outer display surface (using its associated outer coordinate surface), and when we set the depth slider to 0, we can display the inner display surface (using its associated inner coordinate surface). When we set the slider in between 0 and 1, we take an appropriate linear interpolation between the inner and outer surfaces.
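In other words, for a cortical depth slider value between 0 and 1, the displayed coordinates are just a weighted average of the inner and outer coordinate surfaces, something like:
% depth is the cortical depth slider value: 0 = inner surface, 1 = outer surface
coords = depth * outerCoords + (1 - depth) * innerCoords;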
==== Import a Surface ====
So, the first time you want to view a surface in MLR, you can import the surface from the two .off files for the outer and inner surfaces. Use:
File -> Base Anatomy -> Import Surface
Choose the outer (GM) .off file from your volume anatomy folder (which can be set in Edit -> Preferences)
This brings up a GUI that looks like this:
{{:mrtools:surftutorial_import2.png|}}
This allows you to select which surfaces to use for what. The outerSurface and innerSurface fields refer to the **display** surfaces. The outerCoords and innerCoords refer to the **coordinate** surfaces as explained above. The curv is the curvature and the anatomy file is the 3D anatomy that was used to generate the surface. It is important to specify the anatomy file since it contains the coordinate transform that is used for displaying data on the surface.
The whichSurface field at the top is used to switch what surface is displayed in the viewer while you are selecting files. It does not affect the final surface that is generated for MLR. It is always a good idea to go and inspect the 3D anatomy with the whichSurface selector to make sure that the inner and outer coords are well matched to the location of the inner and outer surface of the gray matter.
Here are two examples of settings used in this dialog:
* **Load a folded surface** Set both the Outer Surface and the Outer Surface Coordinates to the outer (GM) .off file, and set both the Inner Surface and the Inner Surface Coordinates to the inner (WM) .off file. Note that you always want to have the coordinates be the uninflated surfaces.
* **Load an inflated surface** To look at an inflated surface, set the Outer Surface to the inflated.off file instead of the outer (GM) .off file. The Outer Surface Coordinates must still be set to the outer (GM) .off file. The Inner Surface and Inner Surface Coordinates should both be set to the inner (WM) .off file. Now, when you move through the cortical depth, you will be moving from inflated to folded as well as from the outer to the inner surface.
Once you have imported a surface, and you are happy with it, then go ahead and save it. The next time you want to view the surface, you can simply open it up as any other Base Anatomy using File/Base Anatomy/Load.
Internally, MLR keeps all the information related to a surface as a combination of a Nifti image and a Matlab base structure. The Nifti image simply contains the curvature image. It is a large 1-dimensional image where each point is the curvature of a single vertex. The Matlab base structure contains various information about the anatomy (it is common to all base anatomies), including user settings, and looks like the following:
>> base = viewGet(getMLRView,'base')
base =
clip: [-1.5 1.5]
coordMap: [1x1 struct]
curSlice: 1
data: [1x145596 double]
gamma: 1
hdr: [1x1 struct]
name: 'leftOuter'
permutationMatrix: [3x3 double]
range: [-1.5 1.5]
rotate: 0
sliceOrientation: 1
talPoints: []
tilt: 0
type: 2
vol2mag: [4x4 double]
vol2tal: []
Note that the base.type = 2, which means that this base is a surface. The field coordMap contains all the information about the vertices and triangles:
>> base.coordMap
ans =
path: '/Volumes/drobo/data/nyu/surfTutorial/Segmentation'
innerSurfaceFileName: 'leftInner.off'
innerCoordsFileName: 'leftInner.off'
outerSurfaceFileName: 'leftOuter.off'
outerCoordsFileName: 'leftOuter.off'
curvFileName: 'leftCurvature.vff'
anatFileName: 'volume.hdr'
innerCoords: [4-D double]
innerVtcs: [145596x3 double]
tris: [291188x3 double]
outer: 'leftOuter.off'
outerCoords: [4-D double]
outerVtcs: [145596x3 double]
coords: [4-D double]
dims: [256 176 256]
rotate: 0
The outerVtcs and innerVtcs are the display vertices. The outerCoords and innerCoords are the coordinate vertices. The tris are the triangles (note that because the inner and outer surfaces are in register, we only need to save one version of the tris). The coords are the current coordinates, which are computed as the linear interpolation between the outer and inner coords depending on how the cortical depth slider is set. Also, the names of the files and the path are stored in the structure so that when you call mrSurfViewer from Plots/Surface Viewer, the correct files can be loaded up. If the path of the files has changed, MLR will ask you to specify the new path when you open up mrSurfViewer.
===== Flat Base Anatomies =====
Flat patches are based on the same principles as the surfaces (and are generated from the surfaces), but being 2D are treated somewhat specially by MLR.
A flat patch starts out as some small part of the surface that is cut out:
{{:mrtools:surfacesandflatmaps4.png|}}
Then, flattening algorithms can be used to flatten this representation into 2 dimensions. There are different flavors of algorithms that do this, but in general they can be thought of as optimization routines that try to place all the vertices on the same 2D plane while minimizing the distortions caused by the flattening process (i.e. changes in the relative distances between vertices). We have set up MLR to use either the [[http://white.stanford.edu/Software.php|Stanford VISTA mrFlatMesh]] tools or [[http://www.pc.rhul.ac.uk/staff/J.Larsson/software.html|SurfRelax]]. The flattened patch ends up looking something like this:
{{:mrtools:surfacesandflatmaps5.png|}}
Two things have changed from the previous view. One is that the smooth curvature values have now been thresholded so that there are only two colors being displayed -- this is just done to make it look pretty. The second is that the cutout patch is now a 2D image rather than a 3D surface. We have actually changed the 2D flat surface into an image, which can be saved as a Nifti file. The data structure that MLR uses is very similar to that of a surface:
>> base = viewGet(getMLRView,'base')
base =
clip: [0 1.5]
coordMap: [1x1 struct]
curSlice: 1
data: [301x301 double]
gamma: 1
hdr: [1x1 struct]
name: 'jg_left_occipital'
permutationMatrix: [3x3 double]
range: [0 0.75]
rotate: 63.499996
sliceOrientation: 1
talPoints: []
tilt: 0
type: 1
vol2mag: []
vol2tal: []
>> base.coordMap
ans =
path: '/Volumes/CBIV2/Heegerlab/anatomy/jg19710606/jg041001ver2'
flatFileName: 'jg_left_occipital.off'
innerCoordsFileName: 'jg_left_WM.off'
outerCoordsFileName: 'jg_left_GM.off'
curvFileName: 'jg_left_WM_Curv.vff'
anatFileName: 'jg041001.hdr'
coords: [4-D double]
innerCoords: [4-D double]
outerCoords: [4-D double]
dims: [176 256 256]
Note a few things from the above. The base.type = 1, which indicates this is a flat map. The field base.data is the 301x301 image for this particular flat map. It is what we see in the viewer window. There is nothing special about the size 301x301; it just so happens that when we flattened the surface and set it to have a resolution of 2, it turned into a 301x301 image. If we had set the resolution to 4, the image would have been 602x602. Now, any given point on that image is associated with some point in the 3D volume. That is known from the coords field of the base structure. Note the size of this field:
>> size(base.coordMap.coords)
ans =
301 301 1 3
It has dimensions 301x301 because it is the same size as the flat map image. The last dimension of 3 is because each point in the 2D image is associated with an x,y,z point from the original 3D volume. The 1 can be ignored (it is a place holder for flat maps with multiple "slices", but is not being used yet). For example, the point in the center of the flat map (150,150) is associated with the 3D volume coordinate 73.6, 51.6, 95.1:
>> base.coordMap.coords(150,150,1,:)
ans(:,:,1,1) =
73.6470479050564
ans(:,:,1,2) =
51.642434868004
ans(:,:,1,3) =
95.1174479268083
So, to display the overlay for each point in the 2D image, MLR looks up its coordinates in the base.coordMap.coords field to get the x,y,z of that point in the 3D anatomy. It then uses the base2scan transformation, just as for surfaces, to find the point in the scan that it comes from, and then displays the color of that point in the image according to the overlay being displayed.
Note that in the coordMap there are three coordinate fields. The coords, innerCoords and outerCoords. The inner and outer Coords come from the inner and outer coordinate surfaces that were used when generating the flat map and represent where each point on the flat map came from the original surfaces. The coords field itself is a derived field which changes depending on how the cortical depth slider is set. If the slider is set to 1, it is equal to the outerCoords, if the slider is set to 0 it is equal to the innerCoords and if the slider is set in between it is an appropriate linear interpolation between the inner and outer coords.
==== makeFlat ====
The easiest way to generate a flat map is to use the interrogator makeFlat. You can use this interrogator from any base anatomy. To turn it on, go to Plots -> Interrogate Overlay. Then, select makeFlat from the drop-down menu at the bottom of the viewer, so that makeFlat appears in the interrogator function text box.
This will allow you to make a flat patch whose center is specified by whatever voxel you click on. A GUI will pop up that has that voxel's coordinates entered as the center point for the flat map. You can specify the radius, the resolution (2 is usually fine), and the method. The method can be either SurfRelax or mrFlatMesh. To use SurfRelax you need to have it installed. mrFlatMesh is available as part of the MLR distribution. Note that SurfRelax sometimes gets stuck making patches. If this happens, choosing a larger radius can help, but you may have to go through many iterations to get it to work. mrFlatMesh always makes a usable patch.
If you are looking at a flat patch, and realize you need the patch to be bigger or recentered, simply click on the voxel you would like to use as the center for the new patch. This will bring up the same GUI as above.
==== Saving a Flat Patch ====
The patches you have created are automatically saved as .off files in the anatomy folder [check if this is the anatomy subfolder of the current MLR directory, or the Anatomy folder that has the original .off files].
Once you have a patch that you like and are going to want to use again, you should save it as a base anatomy. To do this, you can simply do: File -> Base Anatomy -> Save, and this will save it in the 'anatomy' subfolder of the current mrLoadRet directory.
Alternatively, you can do: File -> Base Anatomy -> Save As, which will allow you to rename the flat patch, as well as save it in the Anatomy directory for that subject. This makes it easier to re-use the same flat patch for that subject across scanning sessions.
==== Loading a Flat Patch ====
Once you have flat patches that you like that you have saved using File -> Base Anatomy -> Save/Save As you can simply load them in other sessions.
=== Loading a Flat Patch from an .off file ===
If you have a flat patch generated outside of MLR (from SurfRelax, for instance), you can import it from File -> Base Anatomy -> Import Flat. This gives you the option to use a higher resolution when recalculating the grid from the .off file. Once you're happy with it, you can save it as a Base Anatomy as described above.
=== Using the Flat Viewer ===
When viewing the mrLoadRet-compatible gridded flat patch, you have the option to look at the underlying .off patch using the Flat Viewer, which gives you many nice tools, such as visualizing where the patch came from on the cortical surface, overlaying it with color codes for medial/lateral and rostral/caudal, etc. To do this, while viewing the flat patch in mrLoadRet, go to Plots -> Flat Viewer.
Another thing you can do in the Flat Viewer is double-check the segmentation by looking at the 3D anatomy.
====== Caret ======
===== Purpose =====
The most common way to average data across subjects is to transform all subjects' functional data into a standard space (e.g., Talairach space). An alternative (and better) way to average is to use surface-based registration, which aligns different subjects' anatomies according to major landmarks of the cerebral cortex. There is a way to do this in MLR using Caret that allows users to average certain types of data (overlays, ROIs) across subjects after spherical registration. This page documents the procedures of this process.
===== What you need =====
* FreeSurfer software [[http://surfer.nmr.mgh.harvard.edu/fswiki|link]]
* Caret software [[http://brainvis.wustl.edu/wiki/index.php/Caret:About|link]]
* Download Caret's Freesurfer_to_PALS-B12 Pipeline Distribution [[http://brainvis.wustl.edu/wiki/index.php/Caret:Download|here]] (scroll down to the bottom of page)
* Basic familiarity with the Caret GUI interface and file types (go through some tutorials).
Here are the major steps:
- Use FreeSurfer to do segmentation and surface reconstruction
- Run the preborder script to import FreeSurfer surface to Caret and auto-landmark
- Manually fix the auto-landmarks to conform to guidelines
- Run the postborder script to do spherical registration
- Import the aligned surface to MLR
**//NB//**: The pipeline scripts were developed by Donna Dierker, Caret's main program analyst, in late 2009. The ones tested at MSU (Taosheng Liu's lab) are beta versions of these scripts. This document was written based on these beta scripts, but the final versions should work too (and possibly better). If not, feel free to email me (Taosheng Liu) or the Caret email list (Donna is terrific at helping others).
===== Step 1: Surface generation =====
We use FreeSurfer for segmentation/surface reconstruction from volume anatomy (MP-RAGE) data. One can also use Caret to do this. However, Caret requires more manual intervention (e.g., manual fixing of topological errors during surface optimization), while FreeSurfer can do everything pretty much automatically. This particular way of doing things is endorsed by the Caret team because of the convenience of FreeSurfer's surface engine. It is entirely possible to do everything in Caret; consult the Caret tutorials on how to do segmentation/surface reconstruction in Caret. To generate a surface from volume data using FreeSurfer, run the recon-all command:
$recon-all -subject FS015 -i S01520091009+13-mprage_1mm_iso_20.83_bw.hdr -all
Come back in ~24 hrs and you will find a directory named ''FS015'' that contains all the FreeSurfer files. This directory will reside in ''fmri/anat/'' (the directory specified by the environment variable ''SUBJECTS_DIR'', which is set up by FreeSurfer).
===== Step 2: Run preborder script =====
The preborder script does the following things:
- Import the FreeSurfer surface into Caret. The FreeSurfer surface in native space is first transformed to MNI space (a kind of Talairach transform, although in this case auto-Talairach, i.e., no need to define the AC, PC, ... points), then from MNI to 711-2B, which is Caret's atlas space. Quite cumbersome, I know, but there are just too many standards in this field :-(.
- Resample the surface to 73730 nodes. It turns out that FreeSurfer generates surfaces with many more nodes than Caret. All Caret surfaces have 73730 nodes per hemisphere and all spherical registration parameters are optimized for this node density, so it is necessary to downsample.
- Automatically draw landmarks for spherical registration. This is really cool: Caret can guess (largely correctly) the locations of the six landmarks, so all you need to do is tweak them later instead of drawing them from scratch.
- Generate a html report so you can easily inspect the result.
**Tip:** It is hard to debug shell scripts. Two things to know: there is a debug flag, ''set -x'', at the beginning of the script. It prints out each command as it is run, which is useful for knowing where you are in the script. Once the script runs smoothly, you can turn it off (by adding ''#'' in front). Also, watch out for ''error'' messages while the script runs, and stop it (''ctrl-c'') as soon as you spot one (otherwise it keeps running and prints out many more error messages). Take care of the first error message, then repeat, and so on.
==== Customize preborder.sh ====
Use a text editor to customize the script to your particular setup/needs. All site-specific information is at the top of the file.
=== Specify the input (FreeSurfer) directory ===
Near the top of the file:
DIRLIST=/tmp/dirlist.$$.txt
cat > $DIRLIST <