The Suthana laboratory provides a novel platform for wireless, programmable recording and stimulation of deep brain activity in freely moving human participants. The lab has integrated this platform with external biometrics (e.g., heart rate, skin conductance, respiration, eye tracking, and scalp EEG) and state-of-the-art virtual and augmented reality (VR/AR) technology. This first-of-its-kind research platform combining programmable deep brain recording and stimulation, external biometrics, and VR/AR provides a naturalistic, ecologically valid environment for elucidating the neural mechanisms underlying freely moving human behaviors and for developing and testing viable therapies for patients with neurologic and psychiatric disorders.
Further details
The Suthana laboratory has a state-of-the-art motion capture space for mobile brain imaging housed in a 400-square-foot room in suite 48-136 of the Jane and Terry Semel Institute for Neuroscience and Human Behavior. This space is equipped with 24 wall-mounted high-resolution cameras (OptiTrack; NaturalPoint, Inc.) for sub-millimeter motion capture and body positioning. These include 18 Prime 13W cameras, each with a 3.5 mm stock lens (82° horizontal and 70° vertical FOV; 850 nm band-pass filter; 800 nm (infrared) or 700 nm (visible) filter switcher), 1280 x 1024 resolution, a frame rate of 30-240 FPS, 4.2 ms latency, and a shutter speed of up to 3.9 ms. For facial emotion capture there are an additional 6 Prime 13 cameras, each with a 5.5 mm stock lens (56° horizontal and 46° vertical FOV) and otherwise identical filter, resolution, frame rate, latency, and shutter specifications. Motion capture suits and reflective markers are used in conjunction with specialized software (Motive and Camera SDK; NaturalPoint, Inc.) that allows kinematic labeling through reconstruction of each subject's skeletal structure from sub-millimeter marker positions, real-time output, full real-time control of the cameras, and synchronization with behavioral paradigms and neural data.
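To illustrate how streamed marker data can be consumed and timestamped for later alignment with other data streams, below is a minimal C# sketch. The UDP port and the packet layout (a marker ID followed by three coordinates) are simplifying assumptions for illustration; in practice the Motive streaming protocol is richer and would be accessed through NaturalPoint's SDKs.

```csharp
// Minimal sketch of consuming real-time streamed marker positions over UDP.
// The port number and packet layout below are illustrative assumptions, not
// the actual Motive/Camera SDK streaming format.
using System;
using System.Net;
using System.Net.Sockets;

class MarkerStreamListener
{
    static void Main()
    {
        using var client = new UdpClient(1511);          // hypothetical streaming port
        var remote = new IPEndPoint(IPAddress.Any, 0);

        while (true)
        {
            byte[] packet = client.Receive(ref remote);  // blocks until a frame arrives
            // Assumed layout: int32 marker ID followed by three float32 coordinates.
            int id = BitConverter.ToInt32(packet, 0);
            float x = BitConverter.ToSingle(packet, 4);
            float y = BitConverter.ToSingle(packet, 8);
            float z = BitConverter.ToSingle(packet, 12);

            // Timestamp on arrival so this stream can later be aligned with
            // biometric and neural data recorded against the same host clock.
            long t = DateTime.UtcNow.Ticks;
            Console.WriteLine($"{t}\t{id}\t{x:F4}\t{y:F4}\t{z:F4}");
        }
    }
}
```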
The VR/AR environments are programmed in the Unity game engine using C# to implement customized immersive environments with controlled stimuli and functionality. Using the motion capture cameras, each participant's location is mirrored in real time in the VR/AR application. The VR/AR headsets are all equipped with eye tracking and include the Samsung Gear VR (with SMI eye tracking), HTC Vive, Microsoft HoloLens, and Magic Leap.
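As a hedged sketch of how a participant's tracked position might be mirrored into the virtual scene, the following Unity C# component applies an externally supplied motion-capture pose to the VR camera rig each frame. The IMocapSource interface and its members are hypothetical stand-ins for the lab's actual streaming client, not Unity or OptiTrack API.

```csharp
using UnityEngine;

// Hypothetical interface to whatever client receives motion-capture poses;
// a real project would back this with the lab's streaming client.
public interface IMocapSource
{
    bool TryGetPose(out Vector3 position, out Quaternion rotation);
}

// Attach to the VR/AR camera rig so the participant's real-world location
// is mirrored in the virtual environment every frame.
public class MocapMirror : MonoBehaviour
{
    public MonoBehaviour mocapSourceBehaviour;   // assign a component implementing IMocapSource
    private IMocapSource mocapSource;

    void Awake()
    {
        mocapSource = mocapSourceBehaviour as IMocapSource;
    }

    void LateUpdate()
    {
        // Apply the latest tracked pose after all other updates so rendering
        // uses the freshest motion-capture sample available this frame.
        if (mocapSource != null &&
            mocapSource.TryGetPose(out Vector3 pos, out Quaternion rot))
        {
            transform.SetPositionAndRotation(pos, rot);
        }
    }
}
```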
Heart rate, skin conductance, and respiration are recorded using the wireless BioNomadix Smart Center (Biopac). Data are streamed in real time to the computer running the motion capture system, allowing synchronization of the multiple data streams.
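One common way to synchronize such streams is to stamp every incoming sample against a single monotonic host clock and merge on that time base afterwards; the sketch below illustrates the idea in C#. The BiometricSample type and channel names are hypothetical and do not represent the Biopac acquisition API.

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;

// Hypothetical container for one biometric sample; the actual Biopac
// acquisition interface is not shown here.
public record BiometricSample(string Channel, double Value, double HostTimeSec);

public class StreamSynchronizer
{
    // One monotonic clock shared by all streams (MoCap, biometrics, EEG markers),
    // so samples can be merged on a common time base during analysis.
    private static readonly Stopwatch Clock = Stopwatch.StartNew();
    private readonly ConcurrentQueue<BiometricSample> buffer = new();

    // Called by each acquisition thread as samples arrive.
    public void OnSample(string channel, double value)
    {
        double t = Clock.Elapsed.TotalSeconds;   // stamp at arrival
        buffer.Enqueue(new BiometricSample(channel, value, t));
    }

    // Drained by the logging/synchronization loop.
    public bool TryDequeue(out BiometricSample sample) => buffer.TryDequeue(out sample);
}
```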
Mobile scalp EEG is recorded with the EEGOSPORTS WaveGuard 64-channel EEG and eego mylab system (ANT Neuro) for real-time recording and compatibility with DBS. The amplifier (weight ~500 g) and data acquisition tablet (VAIO™ Ultrabook®, Sony) are battery powered and carried by the participant in a small backpack, allowing real-time access to the scalp EEG via Wi-Fi. The eego mylab system has a sampling rate of up to 16 kHz and can handle large DBS artifacts with a recovery time of less than 10 ms. Electrode positions can be digitized and registered to individual participant MRIs using Xensor software (ANT Neuro).
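To give a sense of how short stimulation artifacts might be handled downstream of such a recording, here is a hedged C# sketch that blanks a fixed window after each known DBS pulse onset and linearly interpolates across it. The 10 ms window (matching the amplifier's stated recovery time) and the pulse-onset list are illustrative assumptions, not parameters of the eego system.

```csharp
using System;
using System.Collections.Generic;

public static class DbsArtifactBlanker
{
    // Linearly interpolate EEG samples across a fixed window after each DBS
    // pulse onset. Window length and pulse onsets are illustrative assumptions.
    public static void BlankArtifacts(double[] eeg, double samplingRateHz,
                                      IEnumerable<double> pulseOnsetsSec,
                                      double windowSec = 0.010)
    {
        int windowSamples = (int)Math.Round(windowSec * samplingRateHz);
        foreach (double onset in pulseOnsetsSec)
        {
            int start = (int)Math.Round(onset * samplingRateHz);
            int end = Math.Min(start + windowSamples, eeg.Length - 1);
            if (start <= 0 || end <= start) continue;   // window must fit in the record

            double v0 = eeg[start - 1];          // last clean sample before the pulse
            double v1 = eeg[end];                // first sample after the window
            for (int i = start; i < end; i++)    // straight line across the gap
            {
                double frac = (double)(i - start + 1) / (end - start + 1);
                eeg[i] = v0 + frac * (v1 - v0);
            }
        }
    }
}
```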