pRF Splitter

This wiki page describes how to use the “splitter” functionality to run a pRF analysis. To access this updated version of the pRF code, run `mlrPlugin` in Matlab, uncheck the pRF box, and check the pRF(v2) box.

Set Up Your Sherlock Account

  1. First, make sure you have an account on Sherlock. This may require asking Justin to email research-computing-support@stanford.edu with your name and SUNetID to request an account.
  2. Then, open Terminal and run the following command, which adds these three lines to your ~/.ssh/config file:
       echo "Host sherlock sherlock.stanford.edu sherlock* sherlock*.stanford.edu
         GSSAPIDelegateCredentials yes
         GSSAPIAuthentication yes" >> ~/.ssh/config

  3. Also run the following command, which sets up SSH connection sharing so that you don't have to enter a password and do two-factor authentication every time you log in to Sherlock (which can be super annoying).

       echo "Host login.sherlock.stanford.edu
         ControlMaster auto
         ControlPersist yes
         ControlPath ~/.ssh/%l%r@%h:%p" >> ~/.ssh/config

  4. You should now be able to log in just by typing “ssh <yourSUID>@login.sherlock.stanford.edu”.
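
Since the splitter drives its transfers from within Matlab (presumably via ssh/scp system calls), it's worth confirming that Matlab can reach Sherlock non-interactively before running an analysis. A minimal check, assuming the config above is in place (replace <yourSUID> with your SUNetID):

       % should print a login node hostname with no password prompt
       [status, result] = system('ssh <yourSUID>@login.sherlock.stanford.edu hostname');
       if status ~= 0, warning('ssh to Sherlock failed -- check your ~/.ssh/config'); end
       disp(result)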

Running pRF with the splitData option enabled

  1. Using the mrLoadRet GUI, click Analysis > pRF Analysis, which should open a dialog box saying “Set pRF parameters”.
  2. Enter the approximate desired amount of time per split, in minutes, into the splitTime field. Based on this, the program calculates how many voxels to include in each split (see the sketch at the end of this section).
  3. Set the rest of the parameters as needed, and hit OK.
  4. In the next dialog box, choose the scans you want to run this pRF analysis on and hit OK. Note: for now, you can only select one scan to run this split analysis on.
  • This will automatically take care of the following:
    • Transferring your current session directory and current group directory to Sherlock.
    • Splitting the data up into a number of splits determined by the time specified in splitTime. These splits are saved in a new folder that's created in your session directory called Splits/
      • Generating a master struct that tracks each of the split structures and stores information about the Views and other bookkeeping.
    • This calls a controller function, which queues up the splits and dispatches them to your local machine, Sherlock, and any other computers/servers you have specified.
      • If using Sherlock, this also transfers the split chunks to the Splits/ folder of the corresponding remote session directory on Sherlock.
      • If using Sherlock, this generates SLURM batch scripts and submits the jobs to the Sherlock queue (for a sketch of such a script, see “Adding a new computer/server” below).
  • Miscellaneous Info:
    • The data on Sherlock is stored in /share/PI/jlg/data/. This folder contains each of the projects (e.g. mglretinotopy) and, within those projects, each of the sessions (e.g. s036020170331).
    • The log files of each job will be stored in /share/PI/jlg/log/
    • You can check the status of all your jobs by logging onto Sherlock and typing “squeue -u $USER”
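
For intuition, the voxels-per-split calculation from step 2 boils down to timing a test fit on a few voxels and dividing. A minimal sketch, assuming fitFn is a function handle that runs the pRF fit on the given voxel indices (the names here are illustrative, not the actual pRF(v2) code):

       % time a handful of voxels, then size each split so that it should
       % finish in roughly splitTime minutes
       function [voxelsPerSplit, numSplits] = estimateSplitSize(fitFn, splitTime, nTotalVoxels)
         nTest = min(10, nTotalVoxels);
         tic; fitFn(1:nTest); secPerVoxel = toc / nTest;
         % splitTime is entered in minutes in the GUI, so convert to seconds
         voxelsPerSplit = max(1, floor(splitTime * 60 / secPerVoxel));
         numSplits = ceil(nTotalVoxels / voxelsPerSplit);
       end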

Overview of Controller Pipeline

  1. Upon selecting options and running the pRF analysis through the GUI, the analysis first runs the prefits.
  2. Then, the program runs the pRF fit on a small number of voxels to estimate the per-voxel fit time, and from that, the number of voxels whose fits can be computed within splitTime minutes (the calculation sketched at the end of the previous section).
  3. Next, pRFSplit is called, which splits the data into chunks and saves each chunk to a file in the Splits folder within the session directory. This also takes care of:
    1. cleaning up the previous Splits folder.
    2. saving a master split struct that tracks information about the fits, prefits, scanCoords, overlays, numSplits, etc.
  4. Then, pRFController is called, which queues up the splits and assigns them to whichever compute slots are available. It continually checks for completed jobs, pulls each finished analysis, and assigns a new job to the freed slot. When all jobs have completed, pRFController exits (a sketch of this loop appears after this list).
    1. You can specify a new computer/server by creating 3 functions, e.g. setupSherlock, addSherlockJob, and checkSherlockJob.
      1. The setupXXX() function lets you specify the number of “slots”, i.e. the number of jobs you will allow to run concurrently. For setupLocal, this is simply the number of available workers on your local machine.
  5. Finally, when all jobs are completed, pRFMergeSplits is called, which merges all the split analyses according to the parameters saved in the master struct, then installs the merged analysis and saves it to the currently open MLR view.
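
To picture the scheduling in step 4, here is a hedged sketch of the controller loop. The helpers slotsAvailable and getSplit, and the assumption that the check function returns true once a split's analysis has been pulled back, are illustrative; see pRFController.m for the real logic.

       % illustrative controller loop: fill open slots, poll for completions,
       % and exit once every split has come back
       queue = 1:controller.numSplits;   % splits not yet dispatched
       running = [];                     % splits currently out on a compute slot
       while ~isempty(queue) || ~isempty(running)
         % hand out splits while a compute slot is free
         while ~isempty(queue) && slotsAvailable(prf)
           prf.sherlock.add(getSplit(controller, queue(1)));  % or prf.local.add, etc.
           running(end+1) = queue(1); queue(1) = [];
         end
         % poll running jobs; the check function pulls any finished analysis
         for s = running
           if prf.sherlock.check(getSplit(controller, s))
             running(running == s) = [];
           end
         end
         pause(10);                      % don't hammer the scheduler
       end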

Adding a new computer/server

The code currently contains functionality to run jobs concurrently on your local machine as well as Sherlock.

To add a new computer/server (e.g. Corn) to this pipeline, you must do the following:

  1. Create a function: setupXXX(~) (e.g. setupCorn(~)), which transfers your session directory, along with the Splits folder, to some location on that server.
  2. Create a function: addXXXJob(split), which runs the job on the server. For a SLURM server such as Sherlock, this will involve generating a batch script and submitting it; for a local machine, this merely involves opening an asynchronous Matlab instance and calling pRFRunSplits(controller.pRFName, split.num) on the job.
  3. Create a function: checkXXXJob(split), which checks whether the analysis file has been generated and, if on a remote server, pulls it to the Splits/Analysis folder on your local machine.
  4. In pRFController.m:
    1. Specify a variable controller.XXXSessionPath and set it to the location of the session directory on that server.
    2. Add a field to the prf struct for your new server, with three subfields pointing to the functions you just created:

         prf.XXX.setup = @setupXXX;
         prf.XXX.add   = @addXXXJob;
         prf.XXX.check = @checkXXXJob;
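
To make this concrete, here is a hedged sketch of what the three functions might look like for a hypothetical SLURM server nicknamed corn. The SBATCH values, file names, and the assumption that the split struct carries the remote session path and analysis name (split.cornSessionPath, split.pRFName) are illustrative, not the actual pRF(v2) source:

       function setupCorn(controller)
         % copy the session directory (including Splits/) to the server
         system(sprintf('rsync -av %s/ corn:%s/', ...
           controller.sessionPath, controller.cornSessionPath));
       end

       function addCornJob(split)
         % generate a SLURM batch script that runs this split, then submit it
         script = sprintf(['#!/bin/bash\n' ...
           '#SBATCH --job-name=pRFSplit%d\n' ...
           '#SBATCH --time=01:00:00\n' ...
           '#SBATCH --output=/share/PI/jlg/log/pRFSplit%d.out\n' ...
           'module load matlab\n' ...
           'matlab -nodisplay -r "pRFRunSplits(''%s'',%d); exit"\n'], ...
           split.num, split.num, split.pRFName, split.num);
         scriptFile = fullfile('Splits', sprintf('split%d.sbatch', split.num));
         fid = fopen(scriptFile, 'w'); fprintf(fid, '%s', script); fclose(fid);
         system(sprintf('scp %s corn:%s/Splits/', scriptFile, split.cornSessionPath));
         system(sprintf('ssh corn "cd %s/Splits && sbatch split%d.sbatch"', ...
           split.cornSessionPath, split.num));
       end

       function done = checkCornJob(split)
         % try to pull the finished analysis back; scp fails until it exists
         remoteFile = sprintf('%s/Splits/Analysis/split%d.mat', ...
           split.cornSessionPath, split.num);
         [failed, ~] = system(sprintf('scp corn:%s Splits/Analysis/', remoteFile));
         done = (failed == 0);
       end

Once these are registered in pRFController.m as in step 2 above, the controller will dispatch splits to corn alongside Sherlock and your local machine.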

Follow the examples in the corresponding scripts for sherlock and local if needed.

Screenshots