
mglCamera

There are a few routines in mgl that now use the Spinnaker API. You may need to compile the binaries to get the library linkage working (also, for now this all works in Matlab2015a and Mac OS Mavericks. A recent version of Mac OS is probably necessary for the Spinnaker API. The old version of Matlab is just because I have not figured out what the replacement for the mexopts.sh file is, so I do not know how to get the right library linkage etc.). After downloading mgl you can test whether things work by plugging in a FLIR camera via USB and running:

>> mglCameraInfo

You should get output that looks like the following:

Application build date: Oct 22 2019 09:47:08

Spinnaker library version: 1.26.0.31

Number of cameras detected: 1


Running example for camera 0...

*** PRINTING TRANSPORT LAYER DEVICE NODEMAP ***

Root
   Device Information
      DeviceID: 19340270
      Device Serial Number: 19340270
      Device Vendor Name  : FLIR
      Device Model Name: Blackfly S BFS-U3-16S2M
      Device Type: U3V
      Device Display Name: FLIR Blackfly S BFS-U3-16S2M
      Device Access Status: OpenReadWrite
      Device Version: 1707.0.125.0
      Device Driver Version: none : 0.0.0.0
      Device User ID: 
      Device Is Updater Mode: 0
      DeviceInstanceId: 01271BEE
      Device Location: 
      Device Current Speed: HighSpeed
      GUI XML Source: Device
      GUI XML Path: Input.xml
      GenICam XML Source: Device
      GenICam XML Path: 
      Device Is In U3V Protocol: 1

   Device Control
      Device Endianess Mechanism: Standard


*** PRINTING TL STREAM NODEMAP ***

Root
   Stream Information
      Stream ID  : 0
      Stream Type: U3V
      Total Buffer Count: 0

   Buffer Handling Control
      Manual Stream Buffer Count: 10
      Resulting Stream Buffer Count: 10
      Stream Buffer Count Max: 3963
      Stream Buffer Count Mode: Auto
      StreamDefaultBufferCount: 10
      StreamDefaultBufferCountMax: 3963
      StreamDefaultBufferCountMode: Auto
      Stream Buffer Handling Mode: OldestFirst
      CRC Check Enable: 0
      Stream Block Transfer Size: 0

   Stream Diagnostics
      Failed Buffer Count: 0
      Buffer Underrun Count: 0


Camera 0 example complete...


ans = 

    device: [1x1 struct]
    stream: [1x1 struct]

If this does not work, then you might try to recompile the code:

>> mglMake(1,'camera');

You can also test capturing an image:

>> im = mglCameraCapture;
(mglPrivateCameraCapture) Grabbed image 1: (1440 x 1080)
>> imagesc(im');         
>> colormap(gray)     

You should get an image something like the following. You may want to first get the camera in focus using the SpinView_QT app, which is part of the Spinnaker SDK in /Applications/Spinnaker/apps.

For running inside an mgl program, you will want to use the threaded version of the code, which interacts with the camera in a separate thread:

>> mglCameraThread init

You then set it to capture some images - the default is to capture until 1 second after you issue the command (see the help on mglCameraThread for settings):

>> mglCameraThread capture

Then you can get the images

>> im = mglCameraThread('get');

The images you get should have time stamps and other information:

>> im
 
im = 
 
               im: [1440x1080x27 uint8]
                t: [1x27 double]
    exposureTimes: [1x27 double]
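
Since im.t contains one timestamp per frame, you can, for instance, estimate the camera's effective frame rate as a quick sanity check:

>> frameInterval = median(diff(im.t));
>> frameRate = 1/frameInterval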

You can continue to capture and get until you are done, after which you should shut the thread down:

>> mglCameraThread quit
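
Putting the thread commands together, a typical capture loop might look something like the following. This is a minimal sketch using only the commands shown above; the pause duration assumes the default 1-second capture window (see the help on mglCameraThread for settings):

mglCameraThread init                             % start the camera thread
for iTrial = 1:3
  mglCameraThread capture                        % capture (default: until 1 s after this call)
  pause(1.5);                                    % wait out the capture window
  trialImages{iTrial} = mglCameraThread('get');  % retrieve images, timestamps, exposure times
end
mglCameraThread quit                             % shut the thread down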

Note that the time stamps (im.t) are based on the device clock that Spinnaker exposes through its ChunkData interface. This timestamp appears to be (as far as I can tell) millisecond-precise, but it is not synced to the system clock. To sync it, the mglCameraThread code queries the system clock and the device clock, computes the offset between the two, and corrects for it (there is a significant lag). This correction is still off by about 50-70 ms on the systems I have tested, so there is a manual calibration routine that can be used to find a more accurate delay and use that instead. To use it, point the camera at the screen and focus, then run:

mglCameraCalibTiming

This will display the system clock on the screen, so that the device and system clocks can be synchronized by examining the returned images and entering the time that you see. You should see something like this:

The interface allows you to either type in the number that you see, hit ENTER to jump a frame forward if the image is not clear (as you will get intermediate frames), or skip the calibration point.

(mglCameraCalibTiming) What number do you see (hit ENTER if image is not clear or 'skip' to skip this calibration point): 2.5982
(mglCameraCalibTiming) Frame: 2.5999 happened at: 2.5982 (delay: 0.0017)

The calibration will not be set until the very end, and the routine will ask you to confirm before setting it:

============================================================
(mglCameraCalbTiming) By setting mglCameraDelay, the program mglCameraThread will correct for this amount of delay the next time you use mglCameraThread and can be overridden by using mglSetParam to set mglCameraDelay to zero. The setting is persistent to starting a new matlab session
============================================================
(mglCameraCalibTiming) Ok to reset mglCameraDelay form -73.7657ms to -76.2614ms of delay (y/n)? n

The parameter that is being set is called mglCameraDelay and you can get it and set it manually if you want:

>> cameraDelay = mglGetParam('mglCameraDelay')

cameraDelay =

       -0.0737656921877654
>> mglSetParam('mglCameraDelay',cameraDelay,1);
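
If you ever need to apply the correction to timestamps yourself, it is just an offset. Here is a minimal sketch, assuming the delay is defined as camera-reported time minus actual display time (this sign convention is an assumption; check it against the mglCameraCalibTiming output):

>> cameraDelay = mglGetParam('mglCameraDelay');  % in seconds, e.g. -0.0738
>> tCorrected = im.t - cameraDelay;              % assumes delay = reported - actual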

Note that the setting is persistent between matlab sessions. This seems to work reasonably well and probably gives accuracy down to several ms. I assume that even though the frame buffer is being updated every 16.7 ms (i.e. 60 Hz), averaging across calibration points gets you slightly better time resolution. If timing is critical, then we should probably implement a digital line sync to the camera through the National Instruments Dig IO stuff.

Spinnaker API

You will need the Spinnaker SDK. This works on MacOS, but you need to follow the directions in the README file that comes with it, which tell you how to install the libraries it uses with homebrew. If everything works properly, you should be able to go to your Applications/Spinnaker/apps folder and run SpinView_QT. If you get the error “SpinView_QT cannot be opened because of a problem.” then it is most likely because you have not installed the homebrew libraries.

Test working memory

I ran a test using the paradigm from Harrison and Tong (2009) Nature 458: 632–635. The precise timing can be found in the file wmface.m, which is in the grustim repository:

git clone https://github.com/justingardner/grustim grustim

Basically the segments are as follows:

Segment  Duration  Description
1        200ms     First orientation stimulus
2        400ms     Interstimulus interval (gray screen)
3        200ms     Second orientation stimulus
4        400ms     Interstimulus interval (gray screen)
5        800ms     Numeric cue (1 or 2 for which stimulus to remember)
6        11s       Delay interval
7        500ms     Comparison stimulus
8        2.5s      Response interval
9        2.5s      Housekeeping - retrieve camera images
10       1s        Housekeeping

The file it saves is a standard mgl task structure that has been converted with getTaskParameters, with a structure containing the camera images added on to it. It will be saved in a matlab file named with the session-id as follows:

SID_YYYYMMDD_HHMMSS.mat
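
For instance, if you want to pull the pieces of the session-id back out of a filename, a small illustrative snippet (not part of mgl) could look like this:

fname = 'TEST_20191029_101045.mat';
tok = regexp(fname,'^(\w+?)_(\d{8})_(\d{6})\.mat$','tokens','once');
subjectID = tok{1};    % 'TEST'
sessionDate = tok{2};  % '20191029' (YYYYMMDD)
sessionTime = tok{3}   % '101045' (HHMMSS)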

>> load('TEST_20191029_101045.mat');    
>> e

e = 

                  nTrials: 3
              trialVolume: [1 1 1]
                trialTime: [0 19.002511224011 38.0194256540271]
             trialTicknum: [317 1443 2514]
                   trials: [1x3 struct]
                 blockNum: [1 2 3]
            blockTrialNum: [1 1 1]
                 response: [NaN NaN NaN]
             reactionTime: [NaN NaN NaN]
    originalTaskParameter: [1x1 struct]
         originalRandVars: [1x1 struct]
           responseVolume: [NaN NaN NaN]
          responseTimeRaw: [NaN NaN NaN]
                 randVars: [1x1 struct]
                parameter: [1x1 struct]
                   camera: [1x3 struct]
                 exptName: 'TEST_20191029_101045'
                        s: [1x1 struct]

There are 3 trials in the example. The relevant fields are e.response and e.randVars, which contain the following information (one entry per trial):

>> e.response

ans =

     1     2     1     2     1     2     1     1     1     1

>> e.randVars

ans = 

                          cue: [2 2 2 2 2 1 2 1 2 2]
             orientationOrder: [2 2 1 2 1 1 1 2 1 1]
    clockwiseCounterclockwise: [-1 -1 -1 -1 1 -1 1 1 1 1]
            orientationJitter: {[0 0]  [0 0]  [-2 -1]  [1 -2]  [1 0]  [1 0]  [3 0]  [2 -2]  [-2 1]  [0 -1]}
                  orientation: {[115 25]  [115 25]  [25 115]  [115 25]  [25 115]  [25 115]  [25 115]  [115 25]  [25 115]  [25 115]}
         orientationThreshold: [3 6 3 6 6 3 6 3 6 3]

These values may be NaN if there was no subject response. Note that e.response is simply which button was pressed by the subject (button 1 or 2), and whether that is correct or not is determined by whether the correct answer is clockwise or counterclockwise (which is coded as -1 and 1). Since the button values are 1 and 2, the -1/1 coding needs to be remapped before comparing. So, to compute correct, you should do the following:

 e.correct = (e.response == (e.randVars.clockwiseCounterclockwise+3)/2)   % maps -1/1 onto buttons 1/2
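
Since e.response is NaN on trials without a response (and NaN never compares equal, so those trials come out as incorrect), you can then compute percent correct over just the answered trials:

>> answered = ~isnan(e.response);
>> percentCorrect = 100*sum(e.correct(answered))/sum(answered)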

The camera info is in e.camera: you can find out which segment each image was recorded in from e.camera.seg, and the time relative to the beginning of the trial from e.camera.t. The exposureTimes field is what the camera reports for how long each exposure was. The images themselves are saved to the video file given by the filepath and filename fields:

>> e.camera(1)

ans = 

          nImages: 425
              seg: [1x425 double]
                t: [1x425 double]
    exposureTimes: [1x425 double]
         filepath: '/Volumes/GoogleDrive/My Drive/docs/2019/shaul'
         filename: '[TEST_20191029_101045]-[0001].mj2'
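
For example, to see when in the trial each frame was captured and during which segment, you can plot these fields directly:

>> plot(e.camera(1).t, e.camera(1).seg, '.');
>> xlabel('Time from trial start (s)');
>> ylabel('Segment number');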

The filename is the video file that was saved for that particular trial; it is in the format [session-id]-[trial-sequence-number]. These images were taken of the screen while the experiment was running and can be read into matlab and viewed using mlrVol:

% build the full path since filepath and filename are stored separately
v = VideoReader(fullfile(e.camera(1).filepath,e.camera(1).filename));
% read every frame into an image stack
frameNum = 1;
while (v.hasFrame)
  im(:,:,frameNum) = v.readFrame;
  frameNum = frameNum+1;
end
mlrVol(im)

This shows that the stimulus appears in the expected frames.