
UCLA Simulation Center – I Hope my Doctor was Trained Here

One of the Sim Center’s many rooms, equipped with cameras, base stations, and some VR equipment set up for our visit.

The first part of our guided tour took place in the space pictured above. It was here that we first met our guides for the tour: Dr Randy Steadman, Medical Director and Founder, and Dr Yue Ming Huang, Education Director, who has been with the Center for 19 of its 22 years. It was a very humbling experience, and we are very grateful to both of them for taking time out of their busy schedules to show us some of the work that goes on here.

The UCLA Simulation Center is located on the A-level of the very aptly-named Learning Resource Center, which sits right in between the School of Medicine and the Ronald Reagan Medical Center. This central location serves the Center well, as students and professionals from all disciplines visit throughout the year: medical students, nursing students, residents, nurses, practicing physicians, respiratory therapists, and more all come here for training. Roughly 10,000 learners visit each year, accounting for nearly 40,000 learning hours that the Center handles annually. Every medical student at UCLA, for example, completes simulation scenarios at the Center during each year of their studies.

The Simulation Center has been around for almost a quarter-century, and so it was no surprise that VR was only a small part of the work that goes on here. Pictured here is a lecture space with a particularly large TV, which I thought was neat.
Here I am for scale, gesturing towards that neat TV while pointing and laughing at a student who had fallen asleep. WARNING: this is not the last lame joke, nor is it the last lame joke about mannequins being humans.

Although it only amounts to a small portion of the training regimen that students can look forward to undergoing here, the VR that we saw was simultaneously exciting and eerie. It’s exciting to think that future medical professionals will benefit from the amazing potential that VR holds for medicine, both in terms of training as well as surgery prep and consultation. It’s eerie because I am absolutely certain that this is exactly what an out-of-body experience feels like when it happens outside of Coachella or Burning Man.

Apologies for the picture-in-picture. In lieu of screenshots, or better yet, the real thing, this is as good as it gets.

Contrary to what you may be thinking, the above image is not your typical OR. This is actually what you experience in one of the VR environments we were shown: a 360 video of a live operation, often narrated by the expert surgeon conducting the procedure. As surgeons who are reading this may already know, it is not often a good idea to have massive 100″ TVs suspended above a patient during an operation. You might say that would go against standard operating procedure…

Here’s another picture, this one somehow more terrible than your standard “picture of a laptop” variety. Notice the virtual representation of the VR controller. Users controlled the video with this, while being able to look up, down, and all around. Personally I tried to look everywhere but the surgery being performed.

These VR demos were hosted by a partner of the UCLA Sim Center, a company called GibLib. (At the time, I didn’t think to ask about the name, but thinking about it now, there’s a part of me that hopes the name is a concatenation of a piece of gaming vernacular and a truncated form of Library.) GibLib bills itself as the Netflix of medical education, a comparison with which I wholeheartedly agree! Both Netflix and GibLib have extensive video libraries, which in GibLib’s case means hundreds of 360 videos showing a wide variety of surgical procedures, often with a brief intro video from the surgeon who will be performing the operation (no Anime, I asked). Both Netflix and GibLib are branching out and beginning to generate their own content, which in GibLib’s case means medical lectures. And finally, you can subscribe to both companies for a trial period, after which you pay a monthly fee.

“Hey honey, what do you want to watch tonight?”

“Oh, I dunno, I really liked that one where Dr Sommer performed a carotid endarterectomy. She really nailed it.”

“Again? Let’s see what else there is. Oh look! It’s a new lecture series by Dr Kim all about Amygdalohippocampectomies.”

Personally I’ve always found Dr Kim’s demeanor too casual given the seriousness of amygdalohippocampectomies, but to each their own. Don’t let that dissuade you, though. After all, I’m the guy that asked GibLib if they had any 360 Anime videos in their library. And that was only after I paid for one year’s subscription.

Rather than highlighting the poster, I’d instead like to point out that there are two cameras on either side of the poster. If you scroll up to the first image in this post, you’ll see several more such cameras all around the room we were in.

Before moving on to all the awesome simulation rooms, I want to mention briefly why the room was ornamented with about 10 of these cameras. Inspired in part by a previous study conducted for the Department of Defense, the Sim Center was teaming up with GE to develop an algorithm that could rate a doctor’s bedside manner by reading their nonverbal communication (body language). The earlier DoD study followed US soldiers in Afghanistan as they went from village to village and tried to build rapport with the inhabitants. Although interactions between doctors and patients usually take place in cooler, less-sandy environments, the stakes can be just as high if you or a loved one is receiving some devastating news. Studies that grade bedside manner may seem a bit Orwellian (“ten-point deduction, the smile never reached your eyes, Kevin”), but any feedback that doctors can get to improve the patient experience is a good thing. On a related note, progress made in this study is sure to benefit greatly from advancements in VR, facial tracking, and eye tracking.

Dr Maja Manojlovic, in one of the Simulation Rooms, pretending to be a different Dr Maja Manojlovic. Before you shout “Nice going ‘Dr’! That mannequin doesn’t have a heart!” think again. Because it does have a heart and you’re super mean for thinking otherwise. Who cares if it’s a fake heart? You better hope it doesn’t have any higher brain function, otherwise you may have just hurt the mannequin’s feelings.

Reluctant though the group was to move on, we had spent over half of the tour taking turns with GibLib’s VR demo in between stimulating conversation with Drs Steadman and Huang. (Because I tend to write these posts in a sarcastic, almost glib tone that belies the fact that I’m writing for an audience of my professional colleagues, I have to take the time to say that I was not being sarcastic just now – conversations with both of our guides were always engaging and illuminating.) Anyway, as I was saying, we had used quite a lot of our allotted hour in just one place, and it was high time we moved on, because, as we were about to learn, the Sim Center is much more than just one room.

Related side note: it is, in fact, also more than just one floor! Thanks to an incredibly generous gift from this author, the Simulation Center is currently in the process of subsuming the two floors immediately above the A-level (Maxine and Eugene Rosenfeld are also helping to fund this transition). Now, as someone who spends any given portion of any given workday in any one of six possible locations, believe me when I say that both Dr Steadman and Dr Huang are looking forward to the aggregation of their disparate locations. Oh, sure, the good Drs claim that they both enjoy the walks between buildings, those brief moments they can spend outdoors. But let’s not kid ourselves: if either Dr Steadman or Dr Huang tells someone that they’re going to go for a walk in the Botanical Gardens for no particular reason, who’s to stop them?

Pictured here is one of our group performing an ultrasound examination on the last person who was foolish enough to try to stand in between Dr Huang and one of her leisurely strolls through the gardens. Actually it’s another mannequin meant to hone students’ ultrasounding skills, not a real person. How can I tell? You’re just going to have to trust in the skills that can only be acquired at the UCLA Simulation Center.

Although the colored 3D heart on the left is an artistic representation of the heart inside the mannequin and not anatomically accurate, the black-and-white ultrasound image on the right is a real, live image. Also real is the faint purple cross-section shown on the left with the 3D heart. The cross-section moved as the ultrasound probe moved, giving the student a better visual reference for which area of the heart the ultrasound was currently imaging. Because as far as I can tell, that black-and-white ultrasound image could just as well be the first image confirming a pregnancy or the location of a brain tumor. I must have been daydreaming during this part of the tour, otherwise I would say something really smart-sounding about left and right ventricles.
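
I have no idea how the trainer actually couples the probe to that purple cross-section, but the basic idea – use the probe’s pose to slice the 3D heart model with a plane – is simple enough to sketch. Everything below (the function name, the toy “heart” made of random points, all the numbers) is my own hypothetical illustration, not the Center’s software.

```python
import numpy as np

def slice_mesh(vertices, probe_pos, probe_normal, thickness=0.5):
    """Return the model points lying within `thickness` of the probe's imaging plane.

    vertices:     (N, 3) array of model points (a stand-in for the 3D heart mesh)
    probe_pos:    (3,) position of the ultrasound probe
    probe_normal: (3,) direction the imaging plane faces
    """
    n = probe_normal / np.linalg.norm(probe_normal)
    signed_dist = (vertices - probe_pos) @ n   # distance of each point from the plane
    return vertices[np.abs(signed_dist) < thickness]

# Toy example: a random blob of points standing in for the heart model.
rng = np.random.default_rng(0)
heart = rng.normal(size=(5000, 3)) * [3.0, 2.5, 2.0]

# As the probe moves and rotates, recompute the highlighted slice every frame.
cross_section = slice_mesh(heart, probe_pos=np.array([0.0, 0.0, 0.0]),
                           probe_normal=np.array([0.2, 0.0, 1.0]))
print(f"{len(cross_section)} of {len(heart)} points fall in the current slice")
```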

Here’s Dr Steadman saying smart-sounding things about left and right ventricles.

Ventricles? Why’d I start talking about…Right! I was talking about Control Rooms and Simulation Rooms! I’d blame my disorganized writing style on a combination of too much caffeine and not enough sleep, but it’s always this haphazard.

This was the first Control Room we visited, where Dr Steadman gave us an in-depth look at the intensely rigorous and strictly controlled training regimen that students are put through. These control rooms are where instructors monitor and control the tightly scripted scenarios that play out on the other side of the observation windows.

Pictured above is one of two control rooms that we saw, this one being the larger of the two. It allowed three sets of instructors to simultaneously conduct three separate simulation scenarios. Conduct and performance, in this context, are closer in meaning to their musical connotations than one might think. In any given simulation scenario, almost everyone in the room was working off of a script, and there was only ever one non-scripted person, the learner, in the room at any given time. (My assumption here is that they use the term “learner” because various scenarios played out in these rooms likely involve actors playing the part of a student, not because the Simulation Center can learn doctors real good.)

Here’s a closer look at the equipment used to conduct these performances. To avoid adding noise and confusion to what is already an inherently complicated procedure, conductors use headphones in conjunction with microphones to communicate with the various people in the scenario. Scripted participants wore earpieces so that the learner couldn’t hear the instructions being given to them.

Although any given scenario would last no more than 15 minutes, the scripts that participants were given were quite detailed, often around 10-12 pages in length. In addition to the instructions meant to guide the performances, scripts could also contain one or two ‘branch points,’ in case the learner performed a specific action. (If the learner decides to perform procedure A, tell them that your foot is tingling, for example.) But as I inquired more about these branch points, Dr Steadman drove home the point that there would be few, if any, of them. This is an incredibly specific, tightly focused exercise meant to test a learner’s ability and knowledge on key points under very strict conditions. People like me are probably not the best candidates to participate in these performances, given our tendency to ad lib (as evidenced by this and other posts).
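
I never saw one of these scripts up close, so take this with a grain of salt, but the structure Dr Steadman described – a mostly linear sequence of cues with one or two conditional branches – maps neatly onto a very small data structure. The names and the foot-tingling example below are mine (well, his, from conversation), not lifted from an actual Sim Center script.

```python
from dataclasses import dataclass, field

@dataclass
class Cue:
    """A scripted line or action for one of the participants."""
    role: str        # e.g. "nurse", "patient", "family member"
    instruction: str

@dataclass
class BranchPoint:
    """A conditional cue that only fires if the learner performs a specific action."""
    trigger_action: str
    cue: Cue

@dataclass
class ScenarioScript:
    title: str
    cues: list = field(default_factory=list)       # the main, linear performance
    branches: list = field(default_factory=list)   # the one or two branch points

    def on_learner_action(self, action: str):
        """Return any branch cues triggered by what the learner just did."""
        return [b.cue for b in self.branches if b.trigger_action == action]

# A toy, hypothetical scenario with a single branch point.
script = ScenarioScript(
    title="Post-op chest pain",
    cues=[Cue("patient", "Complain of sharp chest pain when asked."),
          Cue("nurse", "Report the latest vitals when prompted.")],
    branches=[BranchPoint("performs procedure A",
                          Cue("patient", "Mention that your foot is tingling."))],
)

print([c.instruction for c in script.on_learner_action("performs procedure A")])
```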

Another Sim Room; in total there are five sim rooms, each meant to resemble a different kind of environment common to the medical profession.

These rooms are already equipped with cameras for monitoring and recording each performance for subsequent review, but they could also serve the Center’s push for more VR content for its students. Unlike their real-world counterparts, the Simulation Rooms could much more easily accommodate 360 cameras to record simulation scenarios. Footage could then be quickly converted for use in a shared VR environment, allowing instructors an unprecedented, on-the-ground ability to train students on any portions of the performance that need to be addressed.

Here is Dr Huang taking us through a room designed for infant and prenatal care.

Additionally, because HIPAA regulations do not account for the privacy concerns of mannequins, permission to conduct these recordings would presumably be much easier to obtain. The Center thus has the potential to start building its own library of VR content while simultaneously establishing and fine-tuning the best practices and workflows required for recording simulation scenarios. And those best practices and workflows would transfer almost directly to real-life situations, assuming there comes a time when the Center is able to overcome the myriad obstacles it currently faces in building its own 360 video library of surgeries for use in VR training simulations.

Here again is Dr Huang, demonstrating the realism of simulation scenarios, leaving some among us slack-jawed in amazement.
Members of our group in another Sim Room, reacting to some lifelike behavior of the mannequin. They blink, are covered in a material that pretty accurately mimics human skin, and they have a pulse, among other unsettlingly anthropomorphic traits. In point of fact, they are so lifelike that even with the bright, fluorescent lighting we humans have come to rely upon for survival, walking alone through the halls of the Sim Center can be quite the harrowing experience.

As with all of the places we’ve been fortunate enough to visit this quarter, the work we were shown at the UCLA Simulation Center was cutting-edge, state-of-the-art stuff. Similarly, as with every other place we’ve visited, there’s a feeling of latent potential, a sense of promise hitherto untapped. But rather than feeling discouraged, I’m excited. After all, such a lull in progress is to be expected if you believe in the Gartner Hype Cycle. We’re simply progressing through the Trough of Disillusionment towards the Slope of Enlightenment and onto the Plateau of Productivity. And the coolest thing about all this progress? It’s happening right here, in front of our very own eyes, all over UCLA. You just have to know where to look for it.

If nothing else, the UCLA Simulation Center can always fall back on my idea to simultaneously secure funding and raise awareness of the Center’s work: Halloween Fest 2019 at the UCLA Simulation Center. Imagine walking down all those hallways, surrounded by mannequins…with the lights off.

The only thing scarier than that would be being stuck at a party with these two.


The Bionics Lab: It’s as Awesome as it Sounds

The view upon entering. I did not touch the sign, I promise.

Did you know that there exists on campus at least one mad scientist’s lair? That is to say, what appears to be a mad scientist’s lair (don’t want to blow anyone’s cover). There are probably many more besides, but Dr Jacob Rosen’s Bionics Lab looks like a set for an upcoming Ridley Scott film called The Scientist’s Lair or something. And I mean that in the best way possible.

Strewn about the lab is a smorgasbord of stuff. Bits, pieces, bits of pieces, pieces of bits, and, of course, full and partial sets of exoskeletons, all arranged in a manner befitting the minds that combine medicine, physiology, neuroscience, mechanical engineering, electrical engineering, bioengineering, and whatever else I’m missing (which, admittedly, is a lot). (Incidentally, those seem to be the recurring dual themes of these lab visits: brilliant minds with broad specializations, and me missing stuff.)

Check it out! I didn’t get a chance to ask what this thing was, but it sure does look neat.

When you first enter the space, it’s not graduate students wearing exoskeletons that jump out at you, but rather a big, giant plaster-like vertical dome thingy. That was the technical term they used, I believe. Further back in the lab you can see metal scaffolding in much the same shape; these both serve a similar purpose but you could say that one is more…stucco in place than the other. More on that later.

There’s the dome next to the double-doors that lead in and out of the space. Careful readers take note: directly in between the camera and the dome are a set of about nine blue handles. These will be relevant shortly.

I was greeted next to the dome by Dr Ji Ma, who was busy making minor adjustments to the four projectors that work together to produce an image of up to 4K resolution. However, Dr Ma was more keen to show me his nifty invention close by: a set of wearable inertial measurement unit, or IMU, sensors. As with the other equipment I was introduced to, these devices were designed to help in the physical rehabilitation of stroke patients.

Here’s Yang Shen, soon-to-be-Ph.D., wearing three of the sensors and demonstrating the quick, real-time virtual response elicited by his movements. The response was very responsive, in other words.

The wearables were wireless prototypes, each containing accelerometers and gyroscopes to record the speed and range of the subject’s movements.

The guts of one of the wearables. Dr Ma told me that he plans to shrink each of these down to smaller than a coin. I was surprised at how much articulation was transmitted to the avatar: shoulder, elbow, and wrist movement all reacted to Yang’s movements in basically real time.
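
Dr Ma didn’t walk me through his sensor-fusion math, so what follows is just the textbook way you might combine an accelerometer and a gyroscope to track a joint angle in real time (a complementary filter). The joint, the noise levels, and every number here are made up for illustration; his actual pipeline is surely more sophisticated.

```python
import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyroscope rates (deg/s) with accelerometer tilt estimates (deg).

    The gyro is smooth but drifts over time; the accelerometer is noisy but
    drift-free. Blending the two gives a stable, low-latency angle estimate.
    """
    angle = accel_angles[0]
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return np.array(estimates)

# Simulated elbow flexion: a slow 0-90 degree sweep, sampled at 100 Hz with noise.
t = np.arange(0, 2, 0.01)
true_angle = 45 * (1 - np.cos(np.pi * t))                                # degrees
gyro = np.gradient(true_angle, 0.01) + np.random.normal(0, 2, t.size)   # deg/s, noisy
accel = true_angle + np.random.normal(0, 5, t.size)                      # deg, noisier

fused = complementary_filter(gyro, accel)
print(f"mean absolute error: {np.mean(np.abs(fused - true_angle)):.2f} degrees")
```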

Like the exoskeleton I’m about to talk about, the idea behind these devices is to aid in the rehabilitation of upper-body movement in patients who have suffered brain damage from a stroke. But given how small these sensors will end up being, and how quick and accurate the response time was, these devices could easily find their way into other applications in the XR world.

Just to whet your appetite a bit… a bionic hand!!

Astute readers have noticed that in the image of Yang demonstrating the IMU sensors there sits in the background what appears to be a brilliant blue exoskeleton. And so it is!

Look on this thing, ye Mighty, and despair! It is a Mirror Image Bilateral Training Exoskeleton. I haven’t heard back from Dr Rosen yet on my idea to rename it the Blue Bonecrusher. What about the Blue Beast, or just the Beast for short? I’m sure when this thing gains sentience it won’t mind us being so informal.

Before I get into what is going to be the grossest of oversimplifications, I want to give a special thanks to Yang Shen, a Ph.D. candidate who has been with the lab for several years now, for being so patient with me and going over this incredible device. So, as I said, the device is used for Mirror Image Bilateral Training. Stroke patients often suffer damage to one or both hemispheres of the brain; when only one side is damaged, movement can become impaired. Based on the theory of neuroplasticity (which says you can form new neurons and new neural pathways), this device is meant to strengthen the damaged hemisphere by letting the healthy side provide the proper and/or full range of movement.

With both of the patient’s arms strapped in, the operator selects the correct mode, allowing the patient’s healthy side to take the wheel, so to speak, with the damaged side following along with precise mirrored movements.
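
I obviously didn’t get to see the control code, but the core of the mirrored mode is conceptually tiny: read the healthy arm’s joint angles and command the impaired arm’s joints to the reflected values. Here’s a minimal sketch of that one step, with made-up joint names and sign conventions, and with no safety logic whatsoever – the real exoskeleton’s interface is certainly far more involved.

```python
# Seven joints of one arm, in the order an exoskeleton might report them.
# (Hypothetical names; not the actual EXO-UL8 interface.)
JOINTS = ["shoulder_flexion", "shoulder_abduction", "shoulder_rotation",
          "elbow_flexion", "forearm_rotation", "wrist_flexion", "wrist_deviation"]

# Joints whose sign flips when a movement is mirrored across the body's midline.
MIRRORED_SIGN = {"shoulder_abduction": -1, "shoulder_rotation": -1,
                 "forearm_rotation": -1, "wrist_deviation": -1}

def mirror_command(healthy_angles: dict) -> dict:
    """Map the healthy arm's joint angles to target angles for the impaired arm."""
    return {j: MIRRORED_SIGN.get(j, 1) * a for j, a in healthy_angles.items()}

# One control tick: the healthy (leading) arm is read, the impaired arm follows.
healthy = {"shoulder_flexion": 30.0, "shoulder_abduction": 15.0,
           "shoulder_rotation": -10.0, "elbow_flexion": 65.0,
           "forearm_rotation": 20.0, "wrist_flexion": 5.0, "wrist_deviation": 2.0}

print(mirror_command(healthy))
```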

Impaired movement as a result of a stroke has to do solely with brain damage, not damage to the muscles (e.g., muscular dystrophy) or nerves (e.g., a spinal injury). Because of this, it is believed that thinking the command to move your arm, coupled with the arm actually moving the way it’s meant to, helps rehabilitate the damaged hemisphere.

The device also allows for pre-programmed movements (as set up by a physical trainer, for example), is height- and width-adjustable, and consists of three harmonic drives and four Maxon DC motors for a full seven degrees of freedom, but it only comes in one color. Priorities, Dr Rosen, priorities. Where’s my candy-red model?

Here’s Yang, demonstrating how a patient’s arm would strap into the machine. He told me to tell you all that if you don’t think this is super-duper cool then he WILL crush you into bonedust. Just kidding. He can’t do that because the device isn’t exactly mobile. Yet…

Remember those nine blue handles I mentioned earlier? No? Well you’re not a very close reader, are you? (Hint: caption, third image)

Pfft, you believe that guy? Anyway, those nine blue handles sit directly across from The Blue Bonecrusher. They’re part of the studies that the lab has been conducting with the exoskeleton, and will also factor into how this all relates to VR, because as awesome as all of this is, it ain’t exactly the point of this post, now is it? As you might have already guessed, the idea here is for patients to reach out, grab the handles, and turn them. A healthy person will use up to six degrees of freedom to perform this task, even though a human arm comes with seven. This seventh degree is known as redundancy, not unlike my job in a few years’ time.

Stroke patients often do not lose all seven degrees of freedom. As such, although they may be capable of reaching out and grabbing the handle, or otherwise capable of limited movement, the danger lies in reinforcing bad habits. Enter, once again, The Blue Bonecrusher. Instead of allowing patients to unduly rely on (and thus reinforce), say, five degrees of movement, the exoskeleton guides and assists them through a healthy range of motion.

Yang’s also published a study using The Blue Bonecrusher.

Yang’s study goes something like this (and brings us tantalizingly closer to the whole connection to XR). Let’s assume that a healthy range of movement involves an arm moving from point A to point D, passing points B and C along the way. Now let’s assume that a stroke patient has difficulty moving their arm from point A to B and from C to D, but has no difficulty moving from B to C. Yang programmed the Beast to assist only during those portions of the range from A to D that the patient struggles with. Yang called it Asymmetric Bilateral Movement, or “Assist as Needed” movement.
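
Here’s the assist-as-needed idea as I understood it, in code form. The segment boundaries and gains are invented, and the real controller surely works in joint space with actual arm dynamics rather than a single progress parameter, but the logic of “help only where the patient struggles” looks roughly like this:

```python
def assist_gain(s, weak_segments, full_assist=1.0):
    """Assistance level at progress `s` along the reach (0 = point A, 1 = point D).

    `weak_segments` lists the (start, end) stretches where the patient struggles;
    the exoskeleton assists fully there and stays passive everywhere else.
    """
    for start, end in weak_segments:
        if start <= s <= end:
            return full_assist
    return 0.0

# Hypothetical patient: struggles from A to B (0-0.3) and from C to D (0.7-1.0),
# but manages B to C (0.3-0.7) on their own.
weak = [(0.0, 0.3), (0.7, 1.0)]

for s in [0.1, 0.5, 0.9]:
    print(f"progress {s:.1f}: assist = {assist_gain(s, weak)}")
```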

Now, that’s impressive enough as it is. But the folks in the Bionics Lair (I’m just renaming everything now) like to be precise, and like so many scientists, they like to measure stuff. A lot.

Which brings us, finally, to XR!! Near the ceiling in this image you’ll see part of a set of 10 infrared motion capture cameras. Also note the dome scaffolding; that’s coming up in a bit, too.

In order to more accurately program the machine, the Bionics Lair uses these motion capture cameras to record instances of a healthy human reaching for the handles and turning them (or instances of a stroke patient using their healthy hemisphere to make the move). Then, frame by frame (which translates to something like millisecond by millisecond), the researchers can measure each joint’s position relative to the other joints, ultimately plugging all of that information into the Beast.
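The frame-by-frame measurement the lab does is far more sophisticated than anything I could reproduce, but the basic step – turning three tracked marker positions into a joint angle – looks roughly like the sketch below. The marker coordinates are made up; this is just my illustration of the kind of per-frame geometry involved, not the lab’s actual pipeline.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by markers on either side of it.

    For example, shoulder-elbow-wrist markers give the elbow flexion angle.
    """
    u = proximal - joint
    v = distal - joint
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# One (made-up) frame of motion-capture data, in meters.
shoulder = np.array([0.00, 1.40, 0.00])
elbow    = np.array([0.05, 1.12, 0.10])
wrist    = np.array([0.25, 1.05, 0.30])

print(f"elbow angle this frame: {joint_angle(shoulder, elbow, wrist):.1f} degrees")
# Repeat for every frame (roughly millisecond by millisecond) to build the joint
# trajectories that ultimately get plugged into the exoskeleton.
```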

But that’s not all! As part of Yang’s Assist as Needed study, he incorporated visual stimulation in a virtual reality environment. After all, there’s more to life than just reaching out, grabbing a random handle, and turning it. Yang provided patients with a variety of virtual objects to reach toward and interact with, tying a virtual ‘string’ between the patient’s palm and the object and pulling on the patient’s arm with that string to assist with the movement, you guessed it, as needed. Yang’s not bad at naming stuff, I’ll admit, but I think I’m better.
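
The ‘string’ metaphor sounds to me like a simple spring between the palm and the target, with the pull scaled by the same assist-as-needed gain. That’s a guess on my part rather than a description of Yang’s implementation, but it would look something like this (stiffness, force cap, and coordinates all invented):

```python
import numpy as np

def string_force(palm_pos, target_pos, assist_level, stiffness=20.0, max_force=15.0):
    """Pull the palm toward the virtual object like a taut string.

    assist_level (0-1) comes from the assist-as-needed logic: 0 means the patient
    moves entirely on their own, 1 means the exoskeleton pulls at full strength.
    """
    offset = np.asarray(target_pos) - np.asarray(palm_pos)
    force = assist_level * stiffness * offset     # spring-like pull, in Newtons
    norm = np.linalg.norm(force)
    if norm > max_force:                          # keep the pull gentle
        force *= max_force / norm
    return force

# Reaching for a virtual cup about 40 cm away, with half assistance.
print(string_force(palm_pos=[0.2, 1.0, 0.3], target_pos=[0.5, 1.1, 0.5], assist_level=0.5))
```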

Enter, once again, The Dome. The story has come full circle, we are back to the beginning! Or is it the end?

If you’re anything like me, you’ve sometimes gotten a bit queasy when trying on a VR headset. Now imagine you’re not just a recovering stroke patient, but more than likely a geriatric one who is about as familiar and comfortable around cutting-edge technology as I am with my mother-in-law. Having said that, I bet none of you will be very surprised when I say that many of these stroke patients were getting nauseous while using VR headsets.

In an effort to keep himself, his lab, his equipment, and ostensibly his students vomit-free, Dr Rosen designed this dome to serve as a replacement for VR headsets. The dome is placed directly in front of patients, giving them near-total immersion while still allowing them to see the ground underneath them. Seeing the ground helps the patients stay oriented and keep their lunches from making a reappearance. The white dome is a fixed prototype; the scaffolding I mentioned and pictured earlier is meant to be more portable.

I call dibs on whatever Dr Rosen’s Bionics Lair cooks up next because it’s sure to be some kind of exoskeleton. And who doesn’t want an exoskeleton?

Internal skeletons are so passé. By the way, check out Yang’s website for more info about the work he’s doing: https://yangshen.blog/

I tried walking out with it but couldn’t do it. This will have to suffice.

For further reading that is far above my ability to comprehend, Yang has shared with me some of the published work that I mentioned above:

https://www.sciencedirect.com/science/article/pii/B978012811810800004X

Y. Shen, P. W. Ferguson, J. Ma and J. Rosen, “Chapter 4 – Upper Limb Wearable Exoskeleton Systems for Rehabilitation: State of the Art Review and a Case Study of the EXO-UL8—Dual-Arm Exoskeleton System,” Wearable Technology in Medicine and Health Care (R. K.-Y. Tong, ed.), Academic Press, 2018, pp. 71-90.

https://ieeexplore.ieee.org/abstract/document/8512665

Y. Shen, J. Ma, B. Dobkin and J. Rosen, “Asymmetric Dual Arm Approach For Post Stroke Recovery Of Motor Functions Utilizing The EXO-UL8 Exoskeleton System: A Pilot Study,” 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, 2018, pp. 1701-1707.
doi: 10.1109/EMBC.2018.8512665

https://ieeexplore.ieee.org/abstract/document/8246894

Y. Shen, B. P. Hsiao, J. Ma and J. Rosen, “Upper limb redundancy resolution under gravitational loading conditions: Arm postural stability index based on dynamic manipulability analysis,” 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), Birmingham, 2017, pp. 332-338.
doi: 10.1109/HUMANOIDS.2017.8246894

https://link.springer.com/chapter/10.1007/978-3-030-01887-0_53

Ferguson P.W., Dimapasoc B., Shen Y., Rosen J. (2019) Design of a Hand Exoskeleton for Use with Upper Limb Exoskeletons. In: Carrozza M., Micera S., Pons J. (eds) Wearable Robotics: Challenges and Trends. WeRob 2018. Biosystems & Biorobotics, vol 22. Springer, Cham.
doi: 10.1007/978-3-030-01887-0_53


Game Lab Showcase: A Visceral VR Venue

The Game Lab is located in the Broad Art Center, Room 3252. Inside were several computer stations with VR headsets: three Oculus headsets and two Vive headsets. Each station had a collection of two to three VR experiences, designed to inspire students by showing them the breadth of experiences possible with VR. And the breadth was stunning.

Image courtesy: https://www.zerodaysvr.com/

Part of the trouble with a showcase like this is that the experiences themselves are so immersive, so gripping, that you spend half your time being enthralled by just one. Which is exactly what happened to me. Zero Days VR was an artistic immersive documentary experience. The environment was designed and built around an existing feature-length documentary, unsurprisingly named Zero Days. The creators layered the audio on top of an engaging, futuristic technolandscape where the viewer gently glides through walls of circuitry that pulse and react to the words being spoken. Occasionally large panels would appear with clips from the documentary itself, but mostly the viewer spends their time absorbing the landscape, watching it react to the narration and audio of the documentary. Truly engaging, and an exciting future direction for documentary filmmaking.

Image courtesy: http://notesonblindness.arte.tv/en/vr

Notes on Blindness is a British documentary from 2016 based on the audio tapes of John Hull, a writer and theologian who gradually went blind and wrote a book about blindness.

Image courtesy: https://www.youtube.com/watch?v=W2eTgbyiY_0

This was the second experience that I had the privilege of being immersed in. John Hull narrates the expanding aural atmosphere, beginning on a park bench and detailing, one by one, the various sounds that bring his world to life. Children laughing, people walking by, birds taking off in flight, joggers passing by, all of these things appear as ephemeral blue outlines as Hull’s voice, with a warm and crackling analog texture, illuminates the experience. It was simultaneously calming and exhilarating, and definitely unlike anything I’d experienced before. And certainly something that could only be appreciated in a VR environment.

Unfortunately (or, rather, fortunately), these two experiences captivated me throughout my time at the Showcase. Although I regret not being able to try any of the others, the two that I was able to engage with make me confident in saying that the experiences listed below must be equally exciting.

Zeynep Abes, curator of this Showcase, has an incredible understanding of the multitudinous possibilities when it comes to VR experiences. Below is a full list of experiences that were on display (in no particular order):

Spheres
Dear Angelica
Gymnasia
Land Grab
A Short History of the Gaze
Museum of Symmetry
Melody of Dust
Chocolate
Davina
