Data processing, analysis and visualization with mrTools

This tutorial will guide you through data processing, analysis and visualization of your MRI data using mrTools.

Learning goals

After completing the tutorial you should be able to use mrTools to…

…examine raw data (epi and T1 weighted anatomies)
…determine whether motion-compensation worked
…examine the alignment between different data sets
…examine gray-matter / white-matter segmentations made by FreeSurfer and view surfaces
…find the calcarine sulcus and make flat patches of visual cortex
…average data and run pRF analyses
…determine the location of V1
…concatenate data and run an event-related analysis

Computer setup

You will need to download the git repositories for mrTools and mgl and make sure they are in your Matlab path. Do the following (make sure to specify the full pathname to mrTools and mgl in the commands below if they are not in your current directory):

addpath(genpath('mrTools'));
addpath(genpath('mgl'));
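
You can sanity-check that the paths took effect by asking Matlab where it will find the main viewer function:

which mrLoadRet

If that prints a path inside your mrTools directory, the path is set up correctly.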

If you are running the latest Matlab / macOS version, you may run into problems with compiled files (.mexmaci64) not being recognized. For this tutorial we do not need most of these files, so it should not be a big problem. You can check whether you have a problem by running the following:

>> mydisp('test')
test>> 

If instead of printing out test (without a newline) it crashes, then you have two choices. Easiest is to just remove the precompiled file:

>> delete(which('mydisp'))
>> which mydisp
/Users/justin/proj/mgl/utils/mydisp.m

That will remove the compiled file and allow the m file to run instead, which should work on any system.

Your other option is to recompile mydisp.c. You will need XCode (Apple's developer tools, which include various compilers and other utilities) installed, and then you can do the following:

mglMake mydisp.c
mlrMake

Examine raw data

No matter what kind of experiment you do, you always want to look at the data you collect in as raw a format as you can. With neuroimaging this is even more important, because data are often processed with complex multi-step analyses and you can lose track of which features of your data drive a result and get into real trouble. Luckily, with imaging data you can just look at your data, because, well, it's a bunch of images. So let's do that first!

First, start in Matlab by cd'ing to your data directory and running the program mrLoadRet. It will bring up a viewer. The first thing to note is the “Group” drop-down at the top-left. We store data in different “groups” (folders on the hard drive) depending on what stage of analysis the data is in. We want to look at the “Raw” group, so:

Step 1: Select the “Raw” group

In MRI there are different images made with different contrasts (for example, T1, T2 or T2*-weighted images). For our purposes, just know that a typical anatomical image is “T1-weighted”, in which gray matter is gray, white matter is white and cerebrospinal fluid is black. You can take a look at that image by loading it as a “Base Anatomy”. A note here: we can display summary statistics and data on many different kinds of base anatomies (we will see surfaces and flat maps in a second). When we do so, we are just doing this for visualization - we transform the statistics on the fly to display them on these base anatomies, and we always keep the raw data in its original format (thereby avoiding resampling and interpolation of the original data except when it is necessary for visualization). We collect a T1-weighted anatomical image every session - it is used for visualization and for alignment with a canonical anatomy and surfaces (as we will see below).

Step 2: Load the T1 anatomy Go to the menu item File/Base Anatomy/Load and select the T1 weighted anatomy. It should be called something like “anat01_11954_2_1_T1w_9mm_sag.nii”.

You should see the name of the anatomy appear in the Base drop-down and a picture of the brain appear. A few fun things to do here:

Step 3: Examine T1 anatomy Click on the “Multi” radio button. You should see 4 different views of the brain. If you click (make sure that Plots/Interrogate Overlay is unchecked) in any of the views (except the bottom-right one), it will update the other slices to show you that location in the brain. Explore!

Step 4: 3D anatomy view Another view that is sort of fun is to select the 3D radio button and move the 3D slice view around. Not sure that you learn anything different, but it looks kinda cool.

Ok. Cool. So, what about the “functional” data? Well, those are collected at a much faster rate, typically with a fast imaging sequence known as EPI (Echo Planar Imaging), and have a T2*-weighted contrast. These are sensitive to the paramagnetism of deoxygenated hemoglobin (they get darker where there is more deoxygenated hemoglobin). Let's take a look at these.

Step 5: Look at raw EPI images Go to Plots/Display EPI Images. You should see a viewer where you can look at individual images or slices. Look at individual images. Click displayAllSlices and Animate over frames. This will show you a movie of all the images of the brain.

Note the different image contrast. What color is gray matter? What color is white matter? How good is the resolution compared to the T1 we just looked at? Do you see evidence of head motion? Do you see evidence of BOLD contrast (consider that large changes in BOLD are only a couple of percent - do you think you could see them)?
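
If you want to poke at the raw numbers behind these images, here is a minimal sketch that computes the temporal SNR of an EPI scan. It assumes mlrImageReadNifti (which ships with mrTools) is on your path; the filename is a placeholder for one of your own scans:

% a minimal sketch; the filename below is hypothetical
[data,hdr] = mlrImageReadNifti('Raw/TSeries/epi01.nii');  % x by y by slices by volumes
tsnr = mean(data,4) ./ std(data,0,4);                     % temporal SNR per voxel
pctFluctuation = 100 * std(data,0,4) ./ mean(data,4);     % fluctuation as % of mean
% display the tSNR of a middle slice
figure; imagesc(tsnr(:,:,round(end/2))); axis image; colorbar; title('temporal SNR');

Comparing a couple-percent BOLD response against the fluctuation map gives you a feel for why you generally can't see activation by eye.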

Check motion compensation

We typically run an algorithm that tries to use image-based registration (shifting and rotating the image of the brain to make all frames match as best as possible) to compensate for any subject head-motion. This usually helps, but sometimes it can mess things up if it doesn't work properly. So we like to check to see what it did. This should already have been run on your data (it's available from the Analysis menu, if you're interested), so let's just go take a look at it.

  1. Change to the MotionComp group That's the drop-down at the top-left.
  2. Examine the EPI images Use the same procedure as you did above when looking at the raw data. Do you see differences between the videos? Did the MotionComp do anything?
  3. Examine the motion correction matrices These show you how much the motion compensation algorithm had to move each image (see the sketch below for how to read these matrices).
    Does it show lots of movement? Try it on different scans. Are there any scans with so much head motion that you'd be suspicious of the results?
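
To make those matrices concrete, here is a sketch of how to read a rigid-body motion matrix by hand; the matrix below is made up for illustration, not taken from real data:

% a hypothetical 4x4 rigid-body transform like the ones the plots display
theta = 2*pi/180;                                % a 2 degree rotation about z
M = [cos(theta) -sin(theta) 0  0.5; ...
     sin(theta)  cos(theta) 0 -0.3; ...
     0           0          1  0.1; ...
     0           0          0  1.0];
translation = M(1:3,4)';                         % how far this frame was shifted
R = M(1:3,1:3);                                  % the rotation part
rotationDeg = acosd((trace(R)-1)/2);             % total rotation angle in degrees
fprintf('translation [%g %g %g], rotation %g deg\n',translation,rotationDeg);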

Surfaces and flat-maps

It is often helpful to examine our data on renderings of the cortical surface. This is done by processing a high-resolution T1 image of the brain to find the borders between the gray and white matter and the surface of the brain. Then you can segment out just the cortical surface. This is useful since you can throw out data from outside the brain or within the white matter and visualize on the cortical topography. FreeSurfer is a useful set of tools that can do this automatically on a T1 image (you type one command and then come back in 24 hours or so and it has segmented the brain and made triangulated surfaces). Check here if you wish to know more about surfaces and flat maps. In this section we'll take a look at surfaces and make some flat maps for visualization.
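
That one command is FreeSurfer's recon-all; as a sketch (assuming FreeSurfer is installed and configured, and with a placeholder subject ID and filename), you could launch it from Matlab like this:

% placeholder subject ID and T1 filename; run it and come back in ~24 hours
system('recon-all -subject s0371 -i ~/data/s0371_T1w.nii -all');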

Alignment

First, we need an alignment. Note that we took the T1 image of the brain (which we will call the “canonical anatomy”) on a different day. Since you never put your head back into exactly the same place in the magnet, we need to find the transformation between where your head was on the day that we collected the canonical and the day of the experimental session. To do that, we use the T1 weighted image we looked at before, which we call the “session anatomy”. For more about this process see here.

We are going to use a tool called mrAlign to align the canonical and the session anatomy.

  1. Remember subject ID Take note of the first four numbers in the mrLoadRet window title, s####. That is your subjectID.
  2. Quit mrTools File/Quit
  3. Run mrAlign From the Matlab command window type: mrAlign
  4. Load the canonical Go to File/Load Destination (volume). Navigate to ~/data/mlrAnatDB/s####/s####.nii and load.
  5. Load the session anatomy Go to File/Load Source (inplanes). Navigate to your data directory Anatomy/anat01_#####_2_1_T1w_9mm_sag.nii and load.
  6. Examine alignment Move the slice slider at bottom to a slice in the center of the brain and adjust the alpha/overlay button until you can see both brains. Does the alignment look good? You can fiddle with the Translation and Rotation buttons to mess up the alignment and see what it looks like if it is not aligned.

You can skip the following steps if the alignment has already been run. But if it hasn't, here's what you need to do:

  1. Run alignment If the alignment has not already been run (i.e. it looked bad above), then run the alignment using Compute Alignment/Compute Alignment. This should take a minute or two (check the matlab command window for progress).
  2. Save alignment If you have readjusted the alignment, then you are going to want to save that to the session anatomy, by doing File/Save alignment
  3. Align EPI images The above only saves the alignment into the session anatomy. You now need to find the right alignment for the EPI images (they may have been taken with a different slice prescription than the session anatomy). So, to check that, you do:
    1. Load session anatomy as destination You are now going to align the EPIs to the session anatomy, so to do that the session anatomy has to be destination. Go to File/Reload source as destination
    2. Load EPI images Load any of the EPI images by going to File/Load source (inplane) and select any image from Raw/TSeries/*.nii. Note that this is not going to be an image based registration, but a coordinate based registration (see here for details, and the sketch after this list). When the dialog comes up saying “Choose which frame of 4D file…” select 0 and click OK.
    3. Check alignment Same as above for the T1 - it won't look as pretty (the EPIs are lower resolution and contain some distortions), but it should look reasonable.
    4. Export alignment Export the alignment to your session. This will update the orientation information for all of the EPI images. Go to File/Export/Export to mrLoadRet-4.5 and say yes to save Alignment. A dialog box will come up with all the groups. Make sure all groups are selected (to send the alignment to all the groups) and click ok. You will get a dialog box that asks you if you want to remove base anatomies. The base anatomies that are loaded will have the wrong alignment information, so it is best to remove all the bases here (click All and then ok). When you get back into mrTools you can reload the base anatomies from the File/Base Anatomy/Load menu.
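
For the curious, the coordinate based registration mentioned above boils down to composing the transforms stored in the two NIfTI headers. Here is a minimal sketch with made-up sforms (mrAlign does this bookkeeping for you):

% hypothetical sforms; in practice these come from the NIfTI headers
% (each is a 4x4 voxel-to-scanner transform)
epiSform  = [3 0 0 -96; 0 3 0 -96; 0 0 3 -30; 0 0 0 1];  % 3 mm EPI voxels
anatSform = [1 0 0 -96; 0 1 0 -96; 0 0 1 -90; 0 0 0 1];  % 1 mm anatomy voxels
epi2anat  = anatSform \ epiSform;        % i.e. inv(anatSform) * epiSform
anatVoxel = epi2anat * [32; 32; 10; 1];  % map an EPI voxel (homogeneous coords)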

Import surfaces

Ok. Now that we are aligned to the canonical anatomy, we are ready to import a surface and take a look at it.

  1. Go back to mrLoadRet Exit from mrAlign if you haven't already (File/Quit) and run mrLoadRet (type mrLoadRet in the command window).
  2. Import surface Go to File/Anat DB/Import Surface from Anat DB. A dialog box will come up asking you to set the subjectID. Subject ID should show the same number as you noted above. Click Ok in the dialog box.
  3. Choose the left inflated surface From the surfaces folder load s####_left_Inf.off
  4. Choose the canonical In the next window open the s####.nii T1 file you used before.
  5. Adjust surfaces Two windows should open. The “View surface” window allows you to set which surfaces you will view data on. There are two visualization surfaces, outerSurface and innerSurface. By having two surfaces, we can later provide a slider that allows you to smoothly interpolate between the outer and inner surface for visualization. We will therefore use the inflated surface for the outer and the gray matter for the inner surface. The coords surfaces are really important: they determine where the data that you visualize on these surfaces comes from. You want to select the gray and white matter surfaces for these so that we can interpolate data from within the cortex. The dialog should look like the following when you are done (don't press OK yet!):
  6. Examine the segmentation Remember that the segmentation should show you the outer and inner surfaces of cortex. Let's examine whether the algorithm found them correctly. You should see the 3D anatomy in the other window with white and yellow dots. The white dots indicate the gray matter / white matter boundary. The yellow dots indicate the pial surface, or the gray matter / CSF boundary. Scroll through the slices. Do you see any errors, i.e. white matter that snuck in, gray matter that was left out, etc.? Does the segmentation fail in any specific parts of the brain? How noisy do the gray matter and white matter boundaries look?
  7. Import and do other hemisphere If you are satisfied, click Ok to import the surface, and do the other hemisphere.
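
If you ever want to poke at a surface directly from the command line, here is a sketch that assumes mrTools' loadSurfOFF function and a placeholder filename (if your version returns different field names, adjust accordingly):

% load an OFF surface file (placeholder filename)
surf = loadSurfOFF('s0371_left_Inf.off');
fprintf('%i vertices, %i triangles\n',surf.Nvtcs,surf.Ntris);
% render it: vtcs is Nvtcs x 3 coordinates, tris is Ntris x 3 vertex indices
figure; patch('Vertices',surf.vtcs,'Faces',surf.tris, ...
  'FaceColor',[0.8 0.7 0.6],'EdgeColor','none');
axis equal; camlight; lighting gouraud;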

Make a flat patch

Often it is convenient to look at data on a flattened piece of cortex (makes it easier to view V1). Let's make flat patches around the calcarine sulcus.

  1. Rotate surface until you see the calcarine The calcarine should be easily visible on the medial surface of the occipital cortex. Make sure that Plots/Interrogate Overlay is not checked and rotate the brain around until you see the calcarine. You might also try moving the “cortical Depth” sliders at the top left. These will show you views from fully inflated down to the white matter surface.
  2. Select the makeFlat interrogator Now, turn on interrogators (these are functions that get run when you click on parts of the brain). Go to Plots/Interrogate Overlay and select. In the bottom right, you should see an Interrogator drop-down. Select makeFlat.
  3. Click on the calcarine Now click on the calcarine near the occipital pole. This will set where you want to center your patch. You can play around with the radius until you get a patch that covers the whole calcarine (about 60 usually works)
  4. Run the flattening algorithm Click Ok. In the next dialog box use the default options (flatRes=2, threshold checked and mrFlatMesh) and click ok. Check the command window; it will show you some updates as it chugs along. You should now have a flat patch in the viewer.
  5. Which way is up The most confusing thing about these patches is which way is up. To figure this out:
    1. Make sure session anatomy is loaded Peek in the Base drop-down and make sure you have the session anatomy (anat…). If you don't, load it and then switch back to your flat map.
    2. Select searchForVoxel interrogator In the interrogator drop down at bottom left.
    3. Start search Click on any location in the flat patch. A dialog box should come up. Make sure that baseName is set to your session anatomy and that continuousMode is checked.
    4. Find the top of the patch Move your mouse (don't click) over the patch. You should see a yellow dot move around in the session anatomy, marking the location where that part of the patch comes from.
    5. Click the red close button Note that if you use Ok instead, you will need to wait a second and then set Base back to the flat patch (the searchForVoxel interrogator takes you to the location you clicked in the other anatomy).
    6. Rotate to top Now use the rotate slider at the left to rotate the patch until the location that you determined to be the top of the brain is at the top of the viewer.

pRF analysis and V1 localization

Ok. Now we're ready to look at the pRF analysis.

  1. Averages Typically we average together a few scans in which we showed the exact same set of bars to improve our SNR. This should already be done, so just switch to the Averages group (if it hasn't been done, go to Analysis/Average Time Series, make sure that only the first 4 pRF scans have “include in average” checked, and click ok).
  2. pRF analysis The pRF analysis takes some time to run, so it should already be run for you. (If not, you would run it from the Analysis/pRF analysis menu).
  3. Examine r2 First thing to do is to see how much of the variance is accounted for. In the Overlays box at left, click on r2 to view the r2 overlay.
    Note that this example is from our regular pRF scans in which we run 8 runs and average, which gives better SNR than the 4 runs that you ran.
  4. Examine good voxel fits Making sure that the Interrogator is set to “pRFPlot”, click on points in the flat map with high r2 (whitish). Does the fit (red) match the data (black)? If you move your mouse over the time series you should be able to see, at the bottom, the stimulus and the receptive field that generated the model predictions (a toy sketch of this forward model appears after this list). Does it make sense?
  5. Examine poor voxel fits Click on a few voxels with low r2. What happened there? Why are the fits bad? Do they have visual responses?
  6. Select r2 cutoff Voxels with poor r2 are basically just noise, so you may want to exclude them from the overlays. Set the Min slider at bottom left (making sure that “Clipping Overlays” is set to r2) to a value that gets rid of voxels with poor r2. The cutoff can be pretty arbitrary for this purpose - if you really wanted to, you could base clipping on some statistical test of goodness-of-fit.
  7. Examine eccentricity Choose “eccentricity” from the Overlays and examine it. Click on voxels to see if they match what is being displayed.
  8. Examine polar angle Choose “polarAngle” from the Overlays and examine it. This is how we are going to determine where V1 is. Remember that V1 contains a full hemifield map which is flipped on the cortex (it goes from the upper to the lower visual meridian as you go from the bottom to the top of the calcarine sulcus). When you get to V2, the direction reverses, and this is how you know where V1 is. In practice, you are going to look for colored stripes in the data where the vertical meridians are and draw an ROI around there for V1. You should look for two of the same color lower visual meridians (V1/V2 and V2/V3 borders, green in the example below) at the top and two of the same color upper visual meridians at the bottom (magenta in the example below).
    Click a few voxels on these meridians to determine whether the visual RFs are where you think they should be. Note that things may not look as pretty as in the above example, just because we collected less data. Ask if you can't figure out where V1 is!
  9. Draw V1 region of interest Turn off Plots/Interrogate Overlay. Go to ROI/Create/Polygon, set the name to left/right V1 and click OK. Then, on the flat map, draw your ROI by clicking. Double-click to finish the ROI.
    Make sure that Selected ROIs (Perimeter) is selected in the Display Mode drop-down at left and you should see your V1 ROI.
  10. Draw V1 in other hemisphere Do the same for V1 in the other hemisphere.
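
To make the pRF model concrete, here is a toy sketch of the forward model (a simplified illustration, not mrTools' actual implementation): a 2D Gaussian receptive field is multiplied frame by frame with the stimulus images, and the result is convolved with a hemodynamic response to predict the BOLD time course. All the numbers below are made up:

% toy pRF forward model; every parameter here is hypothetical
[xMesh,yMesh] = meshgrid(linspace(-10,10,101));       % visual field in degrees
x0 = 3; y0 = -1; sigma = 2;                           % hypothetical pRF parameters
rf = exp(-((xMesh-x0).^2+(yMesh-y0).^2)/(2*sigma^2)); % 2D Gaussian receptive field
nFrames = 120;
stim = rand(101,101,nFrames) > 0.95;                  % stand-in for the bar stimulus
neural = zeros(1,nFrames);
for iFrame = 1:nFrames
  neural(iFrame) = sum(sum(stim(:,:,iFrame).*rf));    % stimulus-RF overlap per frame
end
t = 0:1.5:30;                                         % time in seconds (TR = 1.5s)
hrf = (t/6).^3.*exp(-t/2); hrf = hrf/sum(hrf);        % crude gamma-like HRF
prediction = conv(neural,hrf);                        % predicted BOLD time course
plot(prediction(1:nFrames));

The analysis searches over x0, y0 and sigma for the prediction that best matches each voxel's time course; r2 is how much variance that best prediction accounts for, and eccentricity and polar angle are just (x0,y0) in polar coordinates.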

Event-related analysis

Now we will run an event-related analysis on your paradigm to see what you got!

  1. Concatenate scans together You will need to concatenate all the scans that you collected of your paradigm together. This should already be done, so just go to the Concatenation group. If not, you can run Analysis/Concatenate Time Series. Note that this high-pass filters the data at 0.01 Hz to remove low-frequency drift.
  2. Run event-related analysis This should also already have been run, but if not, you can go to Analysis/Event-related Analysis. Click Ok at the first dialog box and the second dialog box. In the Set parameters dialog you can choose the length of the hdr you want to calculate (the default should be fine), as well as the name of the variable that you will use to run the deconvolution on (the variable may specify your attention or working memory conditions as different trial types and will depend on your particular paradigm). For now, let's just act as if all trials are the same and set varname to _all_ (a sketch of what the deconvolution computes appears after this list).
    Click Ok. This may take a few minutes to run - check the command window for progress.
  3. Examine r2 You should see an r2 map, just like you did for the pRF. How much of the variance is being accounted for here?
  4. Select a high r2 voxel Click on one of the high r2 voxels (make sure Plots/Interrogate Overlay is on and the interrogator is set to eventRelatedPlot).
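
For the curious, here is a sketch of what the deconvolution computes (a simplified stand-in, not the actual mrTools code): it builds a stimulus convolution matrix (the scm you will meet below) and solves for the hemodynamic responses by least squares. All numbers are made up:

% toy deconvolution; event volumes and data are hypothetical
nVols = 760; hdrlen = 17;
stimvol = {[5 120 300],[40 200 410],[80 260 500]};  % event onset volumes per condition
nhdr = length(stimvol);
scm = zeros(nVols,nhdr*hdrlen);                     % stimulus convolution matrix
for iCond = 1:nhdr
  for iVol = stimvol{iCond}
    rows = iVol:min(iVol+hdrlen-1,nVols);           % volumes following this event
    cols = (iCond-1)*hdrlen+(1:length(rows));
    scm(rows,cols) = scm(rows,cols)+eye(length(rows));
  end
end
tSeries = randn(nVols,1);                           % stand-in for a voxel time course
ehdr = reshape(scm\tSeries,hdrlen,nhdr)';           % nhdr x hdrlen estimated responses
plot((0:hdrlen-1)',ehdr');                          % one estimated hdr per condition

Because overlapping responses show up as overlapping rows in scm, the least-squares solution untangles them - this is what distinguishes deconvolution from simply averaging the time series after each event.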

Next steps

  • You might want to run an event-related analysis where you split out different conditions depending on your paradigm (ask one of us how to do that).
  • Look in both hemispheres (switch to the other flat map).
  • Look in different parts of the brain (switch the base back to the surfaces and look there).
  • Look at the average response in V1 (click in the V1 ROI on event-related).
  • Draw other ROIs and look at average responses.
  • Run a classification analysis (to get the data, see below).
  • Compute a contrast and display it on the brain (see below).
  • Examine your behavioral data. See below.

Make sure to save some figures (easiest is to do shift-command-4 then space and click on the viewer). These can be used for your presentation!

How to

You may want to get the data from the eventRelatedPlot so that you can do your own analysis - for example, to replot the time series, or to run a classification analysis using a region of interest you have defined, like V1. To do that, click the “Save to Desktop” button in the eventRelatedPlot figure:

After you do that, you should have in your workspace the variable eventRelated:

(eventRelatedPlot) Saving data from figure to variable eventRelated
>> eventRelated
 
eventRelated = 
 
          d: [1x1 struct]
      voxel: [1x1 struct]
    stimvol: {1x3 cell}
        roi: [1x1 struct]

This has data from both the voxel and the region of interest (if you had clicked on an ROI). The fields in the data structure should be pretty straightforward. For the voxel it looks like this:

"matlab"
>> eventRelated.voxel

ans = 

              vox: [61 64 15]
             time: [0.75 2.25 3.75 5.25 6.75 8.25 9.75 11.25 12.75 14.25 15.75 17.25 18.75 20.25 21.75 23.25 24.75]
             ehdr: [3x17 single]
          ehdrste: [3x17 single]
               r2: 0.3391862
          tSeries: [760x1 double]
    tSeriesByStim: {[19x17 double]  [17x17 double]  [18x17 double]}

Vox is where the voxel is in the data, and ehdr contains the estimated hemodynamic response for each condition (in this case 3 conditions) - i.e. your event-related analysis output for this voxel. ehdrste contains the standard error of those estimates computed over repetitions of the stimulus. r2 is the variance accounted for by the fit. tSeries contains the time series this was computed from. tSeriesByStim is that same time series sorted by condition (in this case there were 3 conditions, with 19, 17 and 18 repetitions respectively; the time series is broken up so that it contains the response from the start of each trial out to 17 volumes - the length of the computed event-related hdr). Note that these sorted tSeries do not deal with response overlaps - it's just the raw time series after each event. They will also contain nans for events that don't have complete data (i.e. the scan ended before the response was finished).
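
As a quick example of using these fields, here is a sketch (assuming you saved the structure as above) that averages the sorted time series over repetitions to get a mean response per condition:

nConds = length(eventRelated.voxel.tSeriesByStim);
figure; hold on;
for iCond = 1:nConds
  % each cell is repetitions x timepoints; omit the nans from incomplete trials
  meanResp = mean(eventRelated.voxel.tSeriesByStim{iCond},1,'omitnan');
  plot(eventRelated.voxel.time,meanResp);
end
xlabel('Time (s)'); ylabel('Response');

Since these are raw sorted time series, overlapping responses are not removed - compare against ehdr, which is the deconvolved estimate.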

If you have a region of interest, it will contain similar fields, looking like this:

>> eventRelated.roi
 
ans = 
 
               scm: [760x51 double]
            hdrlen: 17
              nhdr: 3
               dim: [2 2 2 760]
           volumes: [1x760 double]
              ehdr: [3x17 double]
           ehdrste: [3x17 double]
                r2: 0.60908429155102
              time: [0.75 2.25 3.75 5.25 6.75 8.25 9.75 11.25 12.75 14.25 15.75 17.25 18.75 20.25 21.75 23.25 24.75]
             covar: [51x51 double]
           roiName: 'lV4'
               roi: [1x1 struct]
    meanTimecourse: [1x760 double]

If you want the time series, it will be in the field roi:

>> eventRelated.roi.roi
 
ans = 
 
             branchNum: []
                 color: 'black'
                coords: [4x636 double]
             createdBy: ''
    createdFromSession: 's037120180521'
         createdOnBase: 's0371_left_occipital'
                  date: '23-May-2018 16:26:13'
         displayOnBase: 's0371_left_occipital'
                  name: 'lV4'
                 notes: ''
             sformCode: 1
             subjectID: []
              viewType: 'Volume'
               vol2mag: [4x4 double]
               vol2tal: []
             voxelSize: [1 1 1]
                 xform: [4x4 double]
            scanCoords: [3x46 double]
               scanNum: 2
              groupNum: 4
              scan2roi: [4x4 double]
                     n: 46
               tSeries: [46x760 double]
         tSeriesByStim: {[19x46x17 double]  [17x46x17 double]  [18x46x17 double]}

Same format as for the voxel, except that tSeriesByStim has dimensions numRepeats x numVoxels x numTimePoints - in this case the roi had 46 voxels.
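
As a starting point for the classification analysis mentioned in the next steps, here is one simple sketch (one of many possible approaches, assuming the eventRelated structure above): use each trial's time-averaged response across voxels as a pattern and classify with a leave-one-out nearest-centroid rule:

ts = eventRelated.roi.tSeriesByStim;    % one cell per condition, nReps x nVoxels x nTime
X = []; labels = [];
for iCond = 1:length(ts)
  X = [X; mean(ts{iCond},3,'omitnan')];               % trial patterns: nReps x nVoxels
  labels = [labels; iCond*ones(size(ts{iCond},1),1)];
end
nTrials = size(X,1); correct = false(nTrials,1);
for iTrial = 1:nTrials                                % leave-one-out cross-validation
  train = setdiff(1:nTrials,iTrial);
  centroid = zeros(length(ts),size(X,2));
  for iCond = 1:length(ts)                            % mean pattern per condition
    centroid(iCond,:) = mean(X(train(labels(train)==iCond),:),1);
  end
  [~,predicted] = min(sum((centroid-X(iTrial,:)).^2,2));
  correct(iTrial) = (predicted == labels(iTrial));
end
fprintf('leave-one-out accuracy: %0.1f%%\n',100*mean(correct));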

Get behavioral data

If you want to analyze the behavioral data, you can get it from the workspace for the scan the mrLoadRet viewer is currently displaying (i.e. go to the group and scan in the GUI that you want the data for) and do the following:

>> stimfile = viewGet(getMLRView,'stimfile')
 
stimfile = 
 
    [1x1 struct]    [1x1 struct]    [1x1 struct]

viewGet is a function that can get various things about the scan you are viewing (type viewGet without any arguments to see all the things it can get), getMLRView accesses the current view being displayed, and stimfile contains the files that we save about your experiment. Note that it returned an array of 3 of these for this particular experiment, since there are 3 scans that were concatenated together, each having its own stimulus file. You can convert these into a format that is easier to parse by running getTaskParameters:

>> e = getTaskParameters(stimfile{1})
 
e = 
 
                  nTrials: 20
              trialVolume: [1 14 26 39 53 69 85 100 113 129 145 160 175 191 206 220 235 251 267 NaN]
                trialTime: [1x20 double]
             trialTicknum: [1909 3078 4157 5326 6585 8024 9463 10812 11981 13420 14860 16210 17559 18998 20348 21607 22956 24392 25832 31319]
                   trials: [1x20 struct]
                 blockNum: [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20]
            blockTrialNum: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
                 response: [1 1 1 1 2 1 1 1 1 2 2 1 1 1 2 2 1 1 1 NaN]
             reactionTime: [1x20 double]
    originalTaskParameter: [1x1 struct]
         originalRandVars: [1x1 struct]
           responseVolume: [10 23 35 48 62 78 94 109 122 138 154 169 184 200 215 229 244 260 276 NaN]
          responseTimeRaw: [1x20 double]
                 randVars: [1x1 struct]
                parameter: [1x1 struct]
 
>> e.parameter
 
ans = 
 
    numJitter: [2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
 
>> e.randVars
 
ans = 
 
                match: [1 1 1 1 0 1 1 1 1 1 0 0 0 0 0 0 1 1 0 0]
          orientation: {1x20 cell}
    orientationJitter: {1x20 cell}
      baseOrientation: {1x20 cell}

Note that the variables used in your experiment will show up in either e.parameter or e.randVars, depending on how they were coded. You can see the subject's response (which button was pressed) and what each variable was set to on every trial. A nan indicates no response. reactionTime values are in seconds relative to the beginning of the trial segment in which the subject could respond. Remember that if you want to compute something like percent correct, you probably want to accumulate the data across all the scans (i.e. do a for loop over each stim file; see the sketch below). To interpret the particular parameters and what constitutes a correct answer, consult with us about how your particular experiment was coded.
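
Here is a sketch of that loop. The scoring rule (button 1 for match trials, button 2 for non-match trials) is a hypothetical example - check how your experiment was actually coded:

% accumulate performance across all the stim files of the current scan
stimfile = viewGet(getMLRView,'stimfile');
nCorrect = 0; nTotal = 0;
for iScan = 1:length(stimfile)
  e = getTaskParameters(stimfile{iScan});
  answered = ~isnan(e.response);             % trials with no response come back as nan
  correctResponse = 2 - e.randVars.match;    % hypothetical: match=1 -> button 1
  nCorrect = nCorrect + sum(e.response(answered)==correctResponse(answered));
  nTotal = nTotal + sum(answered);
end
fprintf('percent correct: %0.1f%% (%i of %i trials)\n',100*nCorrect/nTotal,nCorrect,nTotal);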

Displaying contrasts between conditions

You may want to display “contrasts” between conditions - i.e. color the brain according to which condition gave a larger response. One way to do that is to run a GLM and choose particular contrasts to display. It's point and click, but a bit involved because there are a lot of relevant settings. See here.

A simpler way is to compute something yourself and overlay it using the function mrDispOverlay. Here's a quick example, which takes your event-related analysis, computes the difference in response between two conditions and displays it.

  1. Get event-related data Follow steps above to get the eventRelated structure.
  2. Compute difference for every voxel
    % First get the estimated responses for all voxels out of the structure
    ehdr = eventRelated.d.ehdr;
    size(ehdr)
     
    ans =
     
        92    92    54     3    17

    The array is xDim x yDim x nSlices x nConditions x hdrlen in size (in this case a 92x92x54 epi image and 3 conditions computed for 17 time points). Let's just take the difference between the 1st and 2nd condition.

    % call it ehdrDiff rather than diff so we don't shadow Matlab's built-in diff
    ehdrDiff = nanmean(ehdr(:,:,:,1,:)-ehdr(:,:,:,2,:),5);

    Note that I use nanmean since some timepoints will have nans in them if the motion comp had to interpolate from outside the volume. Also, I'm just averaging across the whole response - you could try different things.

  3. Display in the current open viewer
    mrDispOverlay(ehdrDiff,2,'Concatenation',getMLRView);

    Note that the 2 is for scan 2 and the group is 'Concatenation'. Once it is installed, it will show up in the viewer as the overlay mrDispOverlay.

  4. Set overlay properties You may wish to choose how to display it (i.e. what colormap and so forth) by going to the menu item Edit/Overlay/Edit Overlay and choosing parameters. In particular, it may be helpful to set the clip boundaries - i.e. so that only values within a range affect the display. I'll set these for this example to -2 and 2:
    And that looks like the following for this example: