
Your Brain + Shattuck Research Group = Star Trek Holodeck

Dr Shattuck set up his VR environment in the DIVE at the Brain Mapping Center. The setup consists of four battlestations (powerful PCs) equipped with Vive headsets and controllers, as well as two base stations. The setup is theoretically portable, and just requires an open space. From right to left: Dr David Shattuck, Dr Maja Manojlovic, Dr Jacob Rosen, Yang Shen, and Dr Ji Ma.

Dr David Shattuck’s VR experience has manifested the virtual reality classroom that we’ve all envisioned in one form or another. The work is the culmination of 20 years’ worth of research and coding, and presents brain imaging and analysis in a futuristic VR environment. As with other posts, I will do my best to explain Dr Shattuck’s work, but perhaps more than any other lab, this one is something that you just have to interact with for yourself. Words will, inevitably, fail to capture the full experience. Pictures will also fall flat, as I was only able to photograph the various PC monitors that displayed what users were viewing in the environment. (At the bottom you’ll find a video that does a better job at capturing a slice of the experience.)

The floating white headsets are other users that are sharing the experience. That big ole gray thing is a brain. The words are labels, and I’m told that the text explains the function of the part that the user selects. I’ve read the text over a couple of times and am still clueless.

The first thing you’ll notice about this environment is the jumble of text. A few things to note here: first, the labels can be toggled on and off. The text in the foreground is displayed only for the user who selected that particular section. The colored labels actually rotate to face the users as they walk around the brain. Lastly, and perhaps most impressively, the labels are automatically generated by software that Dr Shattuck has written. (Again, this is the product of 20 years of work. I almost said culmination, but I dare say that Dr Shattuck is not quite finished.)
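The post doesn’t say how the rotating labels are implemented, but keeping a label upright while turning it to face a moving viewer is a standard billboarding trick. Here’s a minimal sketch in Python with numpy; the function name and the axis convention (y up, z forward) are my own assumptions, not BrainSuite’s:

```python
import numpy as np

def billboard_yaw(label_pos, user_pos):
    """Return the yaw (radians) that turns a label toward a user.

    Keeps the label upright by rotating only around the vertical (y)
    axis, which is how the labels in the DIVE appear to behave.
    """
    to_user = np.asarray(user_pos) - np.asarray(label_pos)
    # Project onto the horizontal plane and measure the heading angle.
    return np.arctan2(to_user[0], to_user[2])

# Example: a label near the brain's surface, a user walking around it.
yaw = billboard_yaw(label_pos=[0.2, 1.5, 0.0], user_pos=[2.0, 1.7, 1.0])
print(np.degrees(yaw))
```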

Here Dr Shattuck combines two features: the brain with color-coded sections and a 2-dimensional gray MRI scan.

One of the features of the environment that was difficult to capture in a photograph was the way in which Dr Shattuck can manipulate that 2-dimensional MRI scan. With his controller, he is able to move the slice plane in any direction, offering users a view of a cross-section of the brain at any position and angle. Impressively, Dr Shattuck told us that all of the necessary MRI data can be collected in under 30 minutes, and with powerful workstations this data can be used to generate all of the assets that we saw in the environment in 2-3 hours. And, of course, this timeframe will only become shorter as hardware improves.
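To give a sense of what extracting an arbitrary slice from a volume involves, here’s a hedged sketch of oblique resampling with numpy and scipy. This is a generic approach, not Dr Shattuck’s actual code; the function and its parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, center, u_axis, v_axis, size=128, spacing=1.0):
    """Sample a 2D slice from a 3D volume along an arbitrary plane.

    center: voxel coordinate the plane passes through
    u_axis, v_axis: orthogonal unit vectors spanning the plane
    """
    u = np.linspace(-size / 2, size / 2, size) * spacing
    v = np.linspace(-size / 2, size / 2, size) * spacing
    uu, vv = np.meshgrid(u, v)
    # Build the 3D coordinates of every pixel in the slice plane.
    coords = (np.asarray(center, dtype=float)[:, None, None]
              + np.asarray(u_axis, dtype=float)[:, None, None] * uu
              + np.asarray(v_axis, dtype=float)[:, None, None] * vv)
    # Trilinear interpolation (order=1) keeps it fast enough for interaction.
    return map_coordinates(volume, coords, order=1)

# Example: a plane tilted 45 degrees through the middle of a toy volume.
vol = np.random.rand(181, 217, 181)   # MRI volumes are roughly this size
s = np.sin(np.pi / 4)
plane = oblique_slice(vol, center=[90, 108, 90],
                      u_axis=[1, 0, 0], v_axis=[0, s, s])
```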

Another view of the color-coded brain combined with the MRI scan image.

The quick turnaround time to translate MRI data into these virtual assets is a huge plus. Neurosurgeons, for example, can use this software for surgical preparation, quickly turning an individual patient’s MRI data into an interactive, 3D environment. The software also allows users to be in separate locations, so a future where neurosurgeons can use this environment to consult with other doctors is already here.

Neural pathways have never looked so psychedelic.

Another visualization of the brain that Dr Shattuck can offer is a color-coded display of neural pathways. Blue indicates up-down, red left-right, and green front-back. “Movement” is a loose term here, because it really refers to the direction in which the pathways run. Dr Shattuck has written an algorithm that can track these pathways, tracing them as they travel from one section of the brain to another.
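This red/green/blue scheme is the standard directionally encoded color convention from diffusion imaging. A tiny illustrative sketch (my own, not from BrainSuite) of how a direction vector maps to a color:

```python
import numpy as np

def direction_to_rgb(direction):
    """Map a unit direction vector to the standard DTI color convention.

    Assumes axes are ordered (left-right, front-back, up-down), so
    red = left-right, green = front-back, blue = up-down, matching the
    convention described in the post.
    """
    d = np.abs(np.asarray(direction, dtype=float))
    # A pathway and its reverse are the same direction, hence abs().
    return d / np.linalg.norm(d)

print(direction_to_rgb([1, 0, 0]))      # pure left-right -> pure red
print(direction_to_rgb([0, 0.5, 0.5]))  # mixed front-back / up-down
```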

Yet another way to look at your brain.

Additionally, Dr Shattuck can display slices of the brain to look at diffusion tensors. These are color-coded in the same way as described above (blue up-down, red left-right, green front-back). Although many appear spherical, they are in fact ellipsoidal, indicating the direction of water diffusion. Water diffuses preferentially along nerve fibers, so an elongated tensor reveals the fibers’ orientation (the same kind of inference works for muscle fibers as well as neurons). This allows scientists to infer the neural structure of the brain: how well one colored area is connected to other areas of the brain.
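For the curious: the ellipsoid shape comes from the eigendecomposition of the 3x3 diffusion tensor. The eigenvectors give the ellipsoid’s axes, the eigenvalues its radii, and a summary statistic like fractional anisotropy measures how elongated (how directional) the diffusion is. A sketch with numpy, using a made-up tensor rather than real data:

```python
import numpy as np

def tensor_shape(D):
    """Decompose a 3x3 diffusion tensor into its ellipsoid axes.

    The eigenvectors give the ellipsoid's axis directions, the
    eigenvalues its radii; the largest eigenvector is the likely
    fiber direction.
    """
    evals, evecs = np.linalg.eigh(D)   # eigenvalues in ascending order
    principal = evecs[:, -1]           # dominant diffusion direction
    # Fractional anisotropy: 0 for a sphere, approaching 1 for a
    # long thin ellipsoid.
    md = evals.mean()
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
    return principal, evals, fa

# A tensor that diffuses mostly along x (i.e., a left-right fiber):
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
direction, radii, fa = tensor_shape(D)
print(direction, fa)   # ~[1, 0, 0], fa ~ 0.8
```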

Here’s a better example of how these diffusion tensors are ellipsoidal. Pictured here is, yep, you guessed it, the corpus callosum.

The corpus callosum, as we all know, is a large nerve tract consisting of a flat bundle of commissural fibers, just below the cerebral cortex. Just as obviously, “corpus callosum” is Latin for “tough body.”

As with the ‘normal’ brain display, the fibrous neural network mode can also be simplified to highlight specific sections of the brain. Of course, I use the term “simplified” in an incredibly loose sense here, because there’s really nothing simple about any of this.

Dr Shattuck’s 20-year-long workflow can be seen first-hand at www.brainsuite.org. The software was originally the focus of his thesis work, and Dr Shattuck has continually massaged and improved it since obtaining his PhD; he is, in my estimation, far from done. He’s eager to explore more applications. This workflow is not limited to brains (although his custom code that automatically labels brains is, by definition, limited to brains). Dr Shattuck told us that this is theoretically possible to do with any volumetric data, from heart scans to data chips. He has not yet done this with a heart scan, however, because apparently studying the most complicated object in the universe is enough.

Here is a screenshot of Dr Shattuck’s software with some MRI data loaded up.

The software that Dr Shattuck has developed is available for anyone to use. The first thing it does with fresh MRI data is isolate the brain, automatically removing the skull and anything else non-brain.

The software has isolated and highlighted the brain.

Once the brain is isolated, the software processes the MRI data slice by slice to produce the 3D model. The process is not unlike photogrammetry, in which several 2-dimensional images are stitched together to create a 3-dimensional model.
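The post doesn’t name the surface-extraction algorithm BrainSuite uses, but building a triangle mesh from a stack of slices is commonly done with marching cubes. A minimal sketch with scikit-image, substituting a synthetic blob for real MRI data:

```python
import numpy as np
from skimage import measure

# A toy "brain mask": a blobby ellipsoid standing in for the isolated brain.
z, y, x = np.mgrid[-40:40, -40:40, -40:40]
mask = (x**2 + y**2 + (z * 1.2)**2) < 30**2

# Marching cubes walks the volume and stitches the boundary voxels
# into a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(float), level=0.5, spacing=(1.0, 1.0, 1.0))

print(verts.shape, faces.shape)   # mesh vertices and triangles
```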

If your MRI scans were to find themselves in an 8-bit Nintendo game, they would look something like this.

The kind of shared-experience environment that Dr Shattuck has developed is everything that I had imagined an environment like this would be: extremely versatile, interactive, immediately intuitive, almost fully automated, and highly instructive. This is the quintessential example of what I envision future doctors will engage with to determine a patient’s brain health. It’s also a very useful pedagogical environment, and gives both instructors and students a look at the brain like never before.

As the software improves, it’s easy to see the myriad applications of just the brain imaging, to say nothing of other objects that can be loaded in and analyzed. On a personal note, I’m very much looking forward to working with Dr Shattuck to recreate this environment across campus, with Dr Shattuck in his lab and some VR equipment loaded and ready to go in one of the Library’s classrooms. Fingers crossed he’s got the time to do that with me!

Not too long ago I had the somewhat rare experience of having an MRI scan done. Because of my access to 3D printers, I was curious about obtaining the MRI data to attempt to convert it into a 3D-print-ready STL file. I managed to get that data, and if you’ll now excuse me, I’m off to www.brainsuite.org to convert my brain into a 3D model and then print out a copy of it.
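For anyone wanting to try the same thing, here’s a hedged sketch of one NIfTI-to-STL route using nibabel, scikit-image, and trimesh. The filenames are hypothetical, and this is a generic pipeline rather than BrainSuite’s own exporter:

```python
import nibabel as nib
from skimage import measure
import trimesh

# Hypothetical filename; any NIfTI brain mask exported by an MRI
# pipeline would do here.
img = nib.load("my_brain_mask.nii.gz")
volume = img.get_fdata()

# Extract the brain surface, scaling vertices by the voxel size so
# the print comes out at true anatomical scale.
voxel_size = img.header.get_zooms()[:3]
verts, faces, _, _ = measure.marching_cubes(volume, level=0.5,
                                            spacing=voxel_size)

mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh.export("my_brain.stl")   # ready for the slicer and the 3D printer
```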

Ain’t the future grand?


Game Lab Showcase: A Visceral VR Venue

The Game Lab is located in the Broad Art Center, Room 3252. Inside are several computer stations with VR headsets: three Oculus headsets and two Vive headsets. Each station offered two to three VR experiences, chosen to inspire students by showing them the breadth of experience that is possible with VR. And the breadth was stunning.

Image courtesy: https://www.zerodaysvr.com/

Part of the trouble with a showcase like this is that the experiences themselves are so immersive, so gripping, that you spend half your time enthralled by just one. Which is exactly what happened to me. Zero Days VR was an artistic, immersive documentary experience. The environment was designed and built around an existing feature-length documentary, unsurprisingly named Zero Days. The creators layered the audio on top of an engaging, futuristic technolandscape in which the viewer gently glides through walls of circuitry that pulse and react to the words being spoken. Occasionally large panels appear with clips from the documentary itself, but mostly the viewer spends their time absorbing the landscape, watching it react to the narration and audio of the documentary. Truly engaging, and an exciting future direction for documentary filmmaking.

Image courtesy: http://notesonblindness.arte.tv/en/vr

Notes on Blindness is a British documentary from 2016 based on the audio tapes of John Hull, a writer and theologian who gradually went blind and wrote a book about blindness.

Image courtesy: https://www.youtube.com/watch?v=W2eTgbyiY_0

This was the second experience that I had the privilege of being immersed in. John Hull narrates the expanding aural atmosphere, beginning on a park bench and detailing, one by one, the various sounds that bring his world to life. Children laughing, people walking by, birds taking off in flight, joggers passing by, all of these things appear as ephemeral blue outlines as Hull’s voice, with a warm and crackling analog texture, illuminates the experience. It was simultaneously calming and exhilarating, and definitely unlike anything I’d experienced before. And certainly something that could only be appreciated in a VR environment.

Unfortunately (or, rather, fortunately), these two experiences captivated me throughout my time at the Showcase. Although I regret not being able to try any of the others, the two that I was able to engage with make me confident in saying that the rest of the list below must be just as exciting to experience.

Zeynep Abes, curator of this Showcase, has an incredible understanding of the multitudinous possibilities when it comes to VR experiences. Below is a full list of experiences that were on display (in no particular order):

Spheres
Dear Angelica
Gymnasia
Land Grab
A Short History of the Gaze
Museum of Symmetry
Melody of Dust
Chocolate
Davina


Steve Anderson’s Media Arts Lab

Steve Anderson’s brand new Media Arts Lab is located in Melnitz Hall, room 1470. The room is spacious and comes equipped with seven powerful workstations, each paired with a VR headset. Both Oculus and Vive equipment can be found.

Oculus donated several headsets to the lab; Vive equipment is also included on some workstations. Here Zizi Li, a PhD student in Cinema and Media Studies, readies the workstations for the Media Lab’s recurring demo series, held Mondays, Wednesdays, and Fridays this quarter.

The lab is just two weeks old, and is still awaiting the installation of a state-of-the-art 7.1 surround sound system. The most prominent feature of the space is a large green screen wall, complete with powerful green lights that help create a shadowless, monochrome backdrop, making it much easier for software to key out the background.

Here is the green screen wall without the green lights on. The shadows here could be problematic when trying to key out the background.
With the aid of these powerful green lights…

…the green screen wall becomes much more evenly lit. White lights would then be used on the subject being filmed/captured. Motion capture made easy!
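The keying itself is straightforward once the background is evenly lit; the even green is what lets software select the background with a narrow color threshold. A minimal chroma-key sketch with OpenCV (the filename and hue thresholds are illustrative assumptions, not the lab’s actual settings):

```python
import cv2
import numpy as np

# Hypothetical frame captured on the green screen stage.
frame = cv2.imread("greenscreen_frame.png")

# Work in HSV, where "green" is a narrow hue band; the even lighting
# from the green lamps is what keeps this band narrow.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower = np.array([40, 60, 60], dtype=np.uint8)
upper = np.array([80, 255, 255], dtype=np.uint8)
background = cv2.inRange(hsv, lower, upper)

# Everything that isn't background is the subject we want to keep.
subject_mask = cv2.bitwise_not(background)
keyed = cv2.bitwise_and(frame, frame, mask=subject_mask)
cv2.imwrite("keyed_subject.png", keyed)
```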

This green screen area will be used in part for motion capture. Additionally, the Vive base stations are set up on either side of the green screen wall.

The room provides ample space for a VR environment.

In addition to the VR and green screen equipment, Steve also has several 360 cameras on hand for students to use, including this monstrosity: the Google Odyssey, an array of 16 GoPros that captures beautiful, high-definition stereoscopic video.

As you can imagine, the footage can get quite large. So large, in fact, that Steve is currently working with Google to download a 20-minute shot that a student took in Alaska using this rig. The file size is over 1 TB, which has so far made it impractical for Steve to retrieve the footage after it’s been processed on Google’s servers.
The battery for the camera is correspondingly large: about the size of two car batteries put together, and just as heavy.
The TSA may ask you to open up the case for them to inspect…

Steve and Zizi explain a bit about how the Odyssey works.

By and large, the space is shaping up to be an incredibly flexible and useful lab for motion capture, projection mapping, and VR work. Steve is eager to explore as many use cases as possible, and is happy to speak to other faculty about bringing their students/classes in to work on some VR projects.

Here students from Maja Manojlovic’s class take turns experiencing some VR environments. Professor Manojlovic teaches a course called EngComp133: Writing in Multimedia Environments: Videogame Rhetoric and Design.

The Media Arts Lab is open to visitors on Mondays, Wednesdays, and Fridays for the rest of the quarter, from 11 a.m. to 1 p.m. Various VR films and environments will be on display for visitors to experience.
