
UCLA Tech Fair Brief Recap

Tech Fair graphic banner

On October 23rd, the UC Learning Centers hosted the annual tech fair for students interested in trying out AR/VR equipment. It was also a great opportunity to expose them to various use cases and to encourage them to explore different career options within STEM. The following are some shots taken at the fair as students trickled in to connect with various booths and play with the demos.

A crowd of students gathers around two virtual reality headset demos.
A group of students huddled around the UCLA Makerspace booth.
UCLA Makerspace booth with the Oculus Rift demos in the background.
Students waiting in line in front of the DJ's stage to have their portraits digitally drawn in real time.
Closer shot of students using the Oculus Quests.

Along with Makerspace, UCLA Esports, Bird, and other vendors, there was a dedicated booth showcasing Augmented Reality, which Anthony Caldwell (the Manager and Resident Technologist at the Scholarly Innovation Lab in YRL) deployed using Vectorworks. Many of the students were wowed by the ease of use as we showed them how to upload a 3D model created in Vectorworks to the cloud, where after 10 to 30 seconds of processing it appears on your iPad through the dedicated app.

More companies are finding it lucrative and time-saving to invest in integrated features that lower the barrier to entry for immersive experiences. It's important for students to learn about this ever-changing technological space. Tools like those presented allow users to spend more time creating and scoping content rather than wrestling with the interface itself.


Your Brain + Shattuck Research Group = Star Trek Holodeck

Dr Shattuck set up his VR environment in the DIVE at the Brain Mapping Center. The setup consists of four battlestations (powerful PCs) equipped with Vive headsets and controllers, as well as two base stations. The setup is theoretically portable and just requires an open space. From right to left: Dr David Shattuck, Dr Maja Manojlovic, Dr Jacob Rosen, Yang Shen, and Dr Ji Ma.

Dr David Shattuck's VR experience has manifested the virtual reality classroom that we've all envisioned in one form or another. The work is the product of 20 years' worth of research and coding, and presents brain imaging and analysis in a futuristic VR environment. As with other posts, I will do my best to explain Dr Shattuck's work, but perhaps more than any other lab, this one is something that you just have to interact with for yourself. Words will, inevitably, fail to capture the full experience. Pictures will also fall flat, as I was only able to take pictures of the various PC monitors that displayed what users were viewing in the environment. (At the bottom you'll find a video that does a better job of capturing a slice of the experience.)

The floating white headsets are other users who are sharing the experience. That big ole gray thing is a brain. The words are labels, and I'm told that the text explains the function of the part that the user selects. I've read the text over a couple of times and am still clueless.

The first thing you’ll notice about this environment is the jumble of text. A few things to note here: first, the labels can be toggled on and off. The text in the foreground is displayed only for the user who selected that particular section. The colored labels actually rotate to face the users as they walk around the brain. Lastly, and perhaps most impressive, the labels are automatically generated by software that Dr Shattuck has written. (Again, this is the product of 20 years of work. I almost said culmination, but I dare say that Dr Shattuck is not quite finished.)
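(For the technically curious: labels that turn to face the viewer are usually done with a simple trick called billboarding. Here's a minimal sketch of the idea in Python; the names are mine and this is not Dr Shattuck's actual code.)

```python
import numpy as np

def billboard_yaw(label_pos, camera_pos):
    """Yaw angle (radians, about the vertical y-axis) that turns a
    label at label_pos to face a viewer standing at camera_pos."""
    to_viewer = np.asarray(camera_pos, float) - np.asarray(label_pos, float)
    # Project onto the horizontal (x, z) plane and take the angle;
    # atan2 handles every quadrant cleanly.
    return np.arctan2(to_viewer[0], to_viewer[2])

# A label at the origin, with a viewer standing 1 m to the right and 1 m forward:
print(np.degrees(billboard_yaw([0, 0, 0], [1.0, 1.7, 1.0])))  # ~45 degrees
```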

Here Dr Shattuck combines two features: the brain with color-coded sections and a 2-dimensional gray MRI scan.

One of the features of the environment that was difficult to capture in a photograph was the way in which Dr Shattuck can manipulate that 2-dimensional MRI scan. With his controller, he can position it at any angle, offering users a glimpse of a slice of the brain in any orientation. Impressively, Dr Shattuck told us that all of the necessary MRI data can be collected in under 30 minutes, and with powerful workstations that data can be used to generate all of the assets we saw in the environment in 2-3 hours. And, of course, this timeframe will only become shorter as hardware improves.
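Repositioning a slice plane like that boils down to resampling the 3D volume along an arbitrary plane. Here's a toy version of that resampling with scipy; it's my own sketch under assumed conventions, not the lab's code:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, center, u, v, size=128, spacing=1.0):
    """Sample a size x size slice from a 3D volume.
    center: voxel coordinates of the slice center.
    u, v:   orthogonal unit vectors spanning the slice plane."""
    center, u, v = (np.asarray(a, float) for a in (center, u, v))
    steps = (np.arange(size) - size / 2) * spacing
    # Voxel coordinates of every pixel on the plane, shape (3, size, size).
    grid = (center[:, None, None]
            + u[:, None, None] * steps[None, :, None]
            + v[:, None, None] * steps[None, None, :])
    # Trilinear interpolation at those coordinates (0 outside the volume).
    return map_coordinates(volume, grid, order=1)

volume = np.random.rand(64, 64, 64)               # stand-in for MRI data
center = [32, 32, 32]
u = [1, 0, 0]                                     # slice runs left-right...
v = np.array([0, 1, 1]) / np.sqrt(2)              # ...tilted 45 degrees
print(oblique_slice(volume, center, u, v).shape)  # (128, 128)
```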

Another view of the color-coded brain combined with the MRI scan image.

The quick turnaround time to translate MRI data into these virtual assets is a huge plus. Neurosurgeons, for example, can use this software for surgical preparation, quickly turning an individual patient's MRI data into an interactive, 3D environment. The software also allows users to be in separate locations, so a future where neurosurgeons use this environment to consult with other doctors is already here.

Neural pathways have never looked so psychedelic.

Another visualization of the brain that Dr Shattuck can offer is a color-coded display of neural pathways: blue indicates up-down, red left-right, and green front-back. "Movement" here is a loose term; it refers to the direction in which the pathways run. Dr Shattuck has written an algorithm that tracks neural pathways, tracing them as they travel from one section of the brain to another.
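That red/green/blue scheme is the standard directionally encoded color map from diffusion imaging: take the absolute value of each component of the fiber direction and use it directly as an RGB value. A minimal sketch, assuming the axis convention x = left-right, y = front-back, z = up-down:

```python
import numpy as np

def direction_to_rgb(direction):
    """Directionally encoded color for a fiber direction:
    red = left-right (x), green = front-back (y), blue = up-down (z).
    A fiber and its reverse get the same color, hence the abs()."""
    d = np.asarray(direction, float)
    return np.abs(d / np.linalg.norm(d))  # (r, g, b), each in [0, 1]

print(direction_to_rgb([1, 0, 0]))   # [1. 0. 0.] -- pure red, left-right
print(direction_to_rgb([0, 0, -1]))  # [0. 0. 1.] -- pure blue, up-down
```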

Yet another way to look at your brain.

Additionally, Dr Shattuck can display slices of the brain to look at diffusion tensors. These are color-coded in the same way as described above (blue up-down, red left-right, green front-back). Although many appear spherical, they are in fact elliptical, indicating the direction of water diffusion. Water diffusion is directional because water moves more freely along nerve fibers than across them, much as it does along muscle fibers. This allows scientists to infer the neural structure of the brain: how well one colored area is connected to other areas of the brain.
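Concretely, each diffusion tensor is a symmetric 3x3 matrix, and its eigenvectors describe the ellipsoid's axes: the eigenvector with the largest eigenvalue is the preferred diffusion (and hence fiber) direction, and the spread of the eigenvalues says how elongated the ellipsoid is. A quick sketch with made-up numbers:

```python
import numpy as np

# A hypothetical diffusion tensor (units of 1e-3 mm^2/s) with strong
# diffusion along x, as you'd expect in a left-right fiber bundle.
D = np.array([[1.7, 0.0, 0.0],
              [0.0, 0.3, 0.0],
              [0.0, 0.0, 0.3]])

evals, evecs = np.linalg.eigh(D)   # eigenvalues in ascending order
principal = evecs[:, -1]           # axis of the largest eigenvalue

# Fractional anisotropy: 0 for a perfect sphere, approaching 1 for
# a needle-thin ellipsoid.
mean = evals.mean()
fa = np.sqrt(1.5 * np.sum((evals - mean) ** 2) / np.sum(evals ** 2))

print(principal)       # ~[1, 0, 0]: diffusion prefers left-right
print(round(fa, 2))    # ~0.8: clearly elliptical, not spherical
```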

Here’s a better example of how these diffusion tensors are elliptical. Pictured here is, yep, you guessed it, the Corpus Callosum.

The corpus callosum, as we all know, is a large nerve tract consisting of a flat bundle of commissural fibers, just below the brain’s cerebral cortex. Just as obvious, corpus callosum is Latin for “tough body.”

As with the ‘normal’ brain display, the fibrous neural network mode can also be simplified to highlight specific sections of the brain. Of course, I use the term “simplified” in an incredibly loose sense here, because there’s really nothing simple about any of this.

Dr Shattuck's 20-year-long workflow can be seen first-hand at www.brainsuite.org. Originally the focus of his thesis work, the software has been continually massaged and improved since Dr Shattuck obtained his PhD, and he is, in my estimation, far from done. He's eager to explore more applications. The workflow is not limited to brains (although his custom, self-written code that automatically labels brains is, by definition, limited to brains). Dr Shattuck told us that this is theoretically possible to do with any volumetric data, from heart scans to data chips. He has not yet done this with a heart scan, however, because apparently studying the most complicated object in the universe is enough.

Here is a screenshot of Dr Shattuck’s software with some MRI data loaded up.

The software that Dr Shattuck has developed is available for anyone to use. Here's a screenshot showing what it looks like with fresh MRI data loaded up. The first thing the software does is isolate the brain, automatically removing the skull and anything else that isn't brain.
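BrainSuite's actual brain extraction is considerably more sophisticated, but the gist of a skull strip can be sketched with a threshold, a largest-connected-component pick, and some morphological cleanup. This is purely illustrative, not what the software does:

```python
import numpy as np
from scipy import ndimage

def toy_skull_strip(mri, threshold):
    """Illustrative brain extraction: threshold the volume, keep the
    largest connected blob, and smooth over small holes."""
    mask = mri > threshold                       # rough tissue mask
    labels, n = ndimage.label(mask)              # connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    mask = labels == (np.argmax(sizes) + 1)      # biggest blob = brain
    mask = ndimage.binary_closing(mask, iterations=3)
    return ndimage.binary_fill_holes(mask)

volume = np.random.rand(32, 32, 32)         # stand-in for a T1-weighted scan
print(toy_skull_strip(volume, 0.5).shape)   # (32, 32, 32) boolean mask
```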

The software has isolated and highlighted the brain.

Once the brain is isolated, the software processes the MRI data scan by scan to produce the 3D model. The process is not unlike other 3D modeling techniques such as photogrammetry, whereby several 2-dimensional images are stitched together to create a 3-dimensional model.
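If you want to play with this idea yourself, isosurface extraction is the textbook way to turn a stack of slices into a mesh. Here's a minimal sketch with scikit-image's marching cubes, using a sphere as a stand-in for a real brain mask; this is my illustration, not Dr Shattuck's pipeline:

```python
import numpy as np
from skimage import measure

# A solid sphere standing in for a segmented brain mask.
x, y, z = np.mgrid[-32:32, -32:32, -32:32]
volume = (x**2 + y**2 + z**2 < 24**2).astype(float)

# Marching cubes walks the voxel grid and stitches the 0.5 isosurface
# into a triangle mesh, much like assembling slices into a model.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)   # (N, 3) vertices and (M, 3) triangles
```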

If your MRI scans were to find themselves in an 8-bit Nintendo game, they would look something like this.

The kind of shared-experience environment that Dr Shattuck has developed is everything that I had imagined an environment like this would be: extremely versatile, interactive, immediately intuitive, almost fully automated, and highly instructive. This is the quintessential example of what I envision future doctors will engage with to determine a patient’s brain health. It’s also a very useful pedagogical environment, and gives both instructors and students a look at the brain like never before.

As the software improves, it's easy to see the myriad applications of the brain imaging alone, to say nothing of other objects that can be loaded in and analyzed. On a personal note, I'm very much looking forward to working with Dr Shattuck to recreate this environment across campus, with Dr Shattuck in his lab and some VR equipment loaded and ready to go in one of the Library's classrooms. Fingers crossed he's got the time to do that with me!

Not too long ago I had the somewhat unique experience of having an MRI scan done. Because of my access to 3D printers, I was curious about obtaining the MRI data to attempt to convert it into a 3D-print-ready STL file. I managed to get that data, and if you’ll now excuse me, I’m off to www.brainsuite.org to convert my brain into a 3D model and then print out a copy of it.
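For anyone tempted to try the same thing, the last step, from mesh to print-ready file, can be nearly a one-liner with a library like trimesh (hypothetical filename; the sphere again stands in for real MRI data):

```python
import numpy as np
import trimesh
from skimage import measure

# Reuse the marching cubes trick from above on a stand-in volume.
x, y, z = np.mgrid[-32:32, -32:32, -32:32]
mask = (x**2 + y**2 + z**2 < 24**2).astype(float)
verts, faces, _, _ = measure.marching_cubes(mask, level=0.5)

# Wrap the mesh and write a binary STL, ready for the slicer.
trimesh.Trimesh(vertices=verts, faces=faces).export('my_brain.stl')
```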

Ain’t the future grand?


Game Lab Showcase: A Visceral VR Venue

The Game Lab is located in the Broad Art Center, Room 3252. Inside were several computer stations with VR headsets: three Oculus headsets and two Vive headsets. Each station offered a collection of two to three VR experiences, designed to inspire students by showing them the breadth of experience that is possible with VR. And the breadth was stunning.

Image courtesy: https://www.zerodaysvr.com/

Part of the trouble with a showcase like this is that the experiences themselves are so immersive, so gripping, that you spend half your time enthralled by just one. Which is exactly what happened to me. Zero Days VR was an artistic, immersive documentary experience. The environment was designed and built around an existing feature-length documentary, unsurprisingly named Zero Days. The creators layered the audio on top of an engaging, futuristic technolandscape in which the viewer gently glided through walls of circuitry that pulsed and reacted to the words being spoken. Occasionally large panels would appear with clips from the documentary itself, but mostly the viewer spent their time absorbing the landscape, watching it react to the narration and audio of the documentary. Truly engaging, and an exciting future development for documentary filmmaking.

Image courtesy: http://notesonblindness.arte.tv/en/vr

Notes on Blindness is a British documentary from 2016 based on the audio tapes of John Hull, a writer and theologian who gradually went blind and wrote a book about the experience.

Image courtesy: https://www.youtube.com/watch?v=W2eTgbyiY_0

This was the second experience that I had the privilege of being immersed in. John Hull narrates the expanding aural atmosphere, beginning on a park bench and detailing, one by one, the various sounds that bring his world to life. Children laughing, people walking by, birds taking off in flight, joggers passing by, all of these things appear as ephemeral blue outlines as Hull’s voice, with a warm and crackling analog texture, illuminates the experience. It was simultaneously calming and exhilarating, and definitely unlike anything I’d experienced before. And certainly something that could only be appreciated in a VR environment.

Unfortunately (or, rather, fortunately), these two experiences captivated me throughout my time at the Showcase. Although I regret not being able to try any of the other experiences, the two that I did engage with make me confident in saying that the list below must be equally exciting.

Zeynep Abes, curator of this Showcase, has an incredible understanding of the multitudinous possibilities when it comes to VR experiences. Below is a full list of experiences that were on display (in no particular order):

Spheres
Dear Angelica
Gymnasia
Land Grab
A Short History of the Gaze
Museum of Symmetry
Melody of Dust
Chocolate
Davina