Health Location

UCLA Simulation Center – I Hope my Doctor was Trained Here

One of the Sim Center’s many rooms, equipped with cameras, base stations, and some VR equipment set up for our visit.

The first part of our guided tour took place in the space pictured above. It was here that we first met our guides for the tour: Dr Randy Steadman, Medical Director and Founder, and Dr Yue Ming Huang, Education Director, who has been with the Center for 19 of its 22 years. It was a very humbling experience, and we are very grateful to both of them for taking time out of their busy schedules to show us some of the work that goes on here.

The UCLA Simulation Center is located on the A-level of the very aptly named Learning Resource Center, which sits right in between the School of Medicine and the Ronald Reagan Medical Center. This central location serves the Center well, as medical students and professionals alike, from all disciplines, visit throughout the year. Medical students, nursing students, residents, nurses, practicing physicians, respiratory therapists, and more all come to the Center for training. Some 10,000 learners visit each year, accounting for the nearly 40,000 learning hours the Center handles annually. For example, every medical student at UCLA runs simulation scenarios at the Center during every year of their study.

The Simulation Center has been around for almost a quarter-century, and so it was no surprise that VR was only a small part of the work that goes on here. Pictured here is a lecture space with a particularly large TV, which I thought was neat.
Here I am for scale, gesturing towards that neat TV while pointing and laughing at a student who had fallen asleep. WARNING: this is not the last lame joke, nor is it the last lame joke about mannequins being humans.

Although it only amounts to a small portion of the training regimen that students can look forward to undergoing here, the VR that we saw was simultaneously exciting and eerie. It’s exciting to think that future medical professionals will benefit from the amazing potential that VR holds for medicine, both for training and for surgery prep and consultation. It’s eerie because I am absolutely certain that this is exactly what an out-of-body experience feels like when it happens outside of Coachella or Burning Man.

Apologies for the picture-in-picture. In lieu of screenshots, or better yet, the real thing, this is as good as it gets.

Contrary to what you may be thinking, the above image is not your typical OR. This is actually what you experience in one of the VR environments we were shown: a 360 video of a live operation, often narrated by the expert surgeon conducting the procedure. As surgeons who are reading this may already know, it is not often a good idea to have massive, 100″ TVs suspended above a patient during an operation. You might say that would go against standard operating procedure…

Here’s another picture, this one somehow more terrible than your standard “picture of a laptop” variety. Notice the virtual representation of the VR controller. Users controlled the video with this, while being able to look up, down, and all around. Personally I tried to look everywhere but the surgery being performed.

These VR demos were hosted by a partner of the UCLA Sim Center, a company called GibLib. (At the time, I didn’t think to ask about the name, but thinking about it now, there’s a part of me that hopes the name is a concatenation of a piece of gaming vernacular and a truncated form of Library.) GibLib bills itself as the Netflix of medical education, a comparison with which I wholeheartedly agree! Both Netflix and GibLib have extensive video libraries, which in GibLib’s case means hundreds of 360 videos showing a large variety of surgical procedures, often with a brief intro video from the surgeon who will be performing the operation (no Anime, I asked). Both Netflix and GibLib are branching out and beginning to generate their own content, which in GibLib’s case means medical lectures. And finally, you can subscribe to both companies for a trial period, after which you pay a monthly fee.

“Hey honey, what do you want to watch tonight?”

“Oh, I dunno, I really liked that one where Dr Sommer performed a carotid endarterectomy. She really nailed it.”

“Again? Let’s see what else there is. Oh look! It’s a new lecture series by Dr Kim all about Amygdalohippocampectomies.”

Personally I’ve always found Dr Kim’s demeanor too casual given the seriousness of amygdalohippocampectomies, but to each their own. Don’t let that dissuade you, though. After all, I’m the guy that asked GibLib if they had any 360 Anime videos in their library. And that was only after I paid for one year’s subscription.

Rather than highlighting the poster, I’d instead like to point out that there are two cameras on either side of the poster. If you scroll up to the first image in this post, you’ll see several more such cameras all around the room we were in.

Before moving on to all the awesome simulation rooms, I want to mention briefly why the room was ornamented with about 10 of these cameras. Inspired in part by a previous study conducted for the Department of Defense, the Sim Center was teaming up with GE to develop an algorithm that could rate a doctor’s bedside manner by reading their nonverbal communication (body language). The prior DoD study followed US soldiers in Afghanistan as they went from village to village and tried to build a rapport with the inhabitants. Although interactions between doctors and patients often take place in cooler, less sandy environments, the stakes can be just as high if you or a loved one is receiving some devastating news. Studies that grade bedside manner may seem a bit Orwellian (“ten-point deduction, the smile never reached your eyes, Kevin”), but any feedback that doctors can get to improve the patient experience is a good thing. On a related note, progress made in this study is sure to benefit greatly from advancements in VR, facial tracking, and eye tracking.

Dr Maja Manojlovic, in one of the Simulation Rooms, pretending to be a different Dr Maja Manojlovic. Before you shout “Nice going ‘Dr’! That mannequin doesn’t have a heart!” think again. Because it does have a heart and you’re super mean for thinking otherwise. Who cares if it’s a fake heart? You better hope it doesn’t have any higher brain function otherwise you may have just hurt the mannequin’s feelings.

Reluctant though the group was, we had spent over half of the tour taking turns with GibLib’s VR demo in between stimulating conversation with Drs Steadman and Huang. (Because of my tendency to often write these posts in a sarcastic, almost glib tone that belies the fact that I’m writing for an audience consisting of my professional colleagues, I have to take the time to say that I was not being sarcastic just now – conversations with both of our guides were always engaging and illuminating.) Anyway, as I was saying, we had used quite a lot of our allotted hour in just one place and it was high time that we moved on, because, as we were about to learn, the Sim Center is much more than just one room.

Related side note: it is, in fact, also more than just one floor! Thanks to an incredibly generous gift from this author, the Simulation Center is currently in the process of subsuming the two floors immediately above the A-level (Maxine and Eugene Rosenfeld are also helping to fund this transition). Now, as someone who spends any given portion of any given workday in any one of six possible locations, believe me when I say that both Dr Steadman and Dr Huang are looking forward to the aggregation of their disparate locations. Oh, sure, the good Drs claim that they both enjoy the walks in between buildings, that they both enjoy those brief moments they can spend outdoors. But let’s not kid ourselves: if either Dr Steadman or Dr Huang tell someone that they’re going to go for a walk in the Botanical Gardens for no particular reason, who’s to stop them?

Pictured here is one of our group performing an ultrasound examination on the last person who was foolish enough to try to stand in between Dr Huang and one of her leisurely strolls through the gardens. Actually it’s another mannequin meant to hone students’ ultrasounding skills, not a real person. How can I tell? You’re just going to have to trust in the skills that can only be acquired at the UCLA Simulation Center.

Although the colored 3D heart on the left is an artistic representation of the heart inside the mannequin and not anatomically accurate, the black-and-white ultrasound image on the right is a real, live image. Also real is the faint purple cross-section shown on the left with the 3D heart. The cross-section moved as the ultrasound moved, giving the student a better visual reference for which area of the heart the ultrasound was currently imaging. That’s a good thing, because as far as I can tell, that black-and-white ultrasound image could just as easily be confirming a pregnancy or locating a brain tumor. I must have been daydreaming during this part of the tour, otherwise I would say something really smart-sounding about left and right ventricles.

Here’s Dr Steadman saying smart-sounding things about left and right ventricles.

Ventricles? Why’d I start talking about…Right! I was talking about Control Rooms and Simulation Rooms! I’d blame my disorganized writing style on a combination of too much caffeine and not enough sleep, but it’s always this haphazard.

This was the first Control Room we visited where Dr Steadman gave us an in-depth look into the intensely rigorous and strictly controlled training regimen that students are put through. These control rooms are where instructors monitor and control the tightly scripted scenarios that play out on the other side of the observation windows.

Pictured above is one of two control rooms that we saw, with this particular one being the larger of the two. It allowed for three sets of instructors to simultaneously conduct three separate simulation scenarios. Conduct and performance, in this context, are closer in meaning to their musical connotations than one might think. In any given simulation scenario, most everyone in the room was working off of a script and there was only ever one non-scripted person, the learner, in the room at any given time. (My assumption here is that they use the term “learner” because various scenarios played out in these rooms likely involve actors playing the part of a student, not because the Simulation Center can learn doctors real good.)

Here’s a closer look at the equipment used to conduct these performances. Because a simulated surgery is already a complicated, noisy affair, conductors use headphones in conjunction with microphones to communicate with the various people in the scenario. Scripted participants were given earpieces so the learner couldn’t overhear any of the instructions being given to them.

Although any given scenario would last no more than 15 minutes, the scripts that participants were given were quite detailed, often around 10-12 pages in length. In addition to the instructions meant to guide the performances, scripts could also contain one or two ‘branch points,’ in case the learner performed a specific action. (If the learner decides to perform procedure A, tell them that your foot is tingling, for example.) But as I was inquiring more about these branch points, Dr Steadman drove home the point that there would be few, if any, of these branch points. This was an incredibly specific, high-level exercise meant to test a learner’s ability and knowledge on key points under very strict conditions. People like me are probably not the best candidates to participate in these performances, given our tendency to ad lib (as evidenced by this and other posts).

Another Sim Room; in total there are five sim rooms, each meant to resemble the different kinds of environments common to the medical profession.

These rooms are already equipped with cameras for monitoring and recording each performance for subsequent review, and that same infrastructure could serve the Center’s push for more VR content for its students. In contrast to their real-world counterparts, the Simulation Rooms could much more easily accommodate 360 cameras to record simulation scenarios. Footage could then be quickly converted for use in a shared VR environment, giving instructors an unprecedented, on-the-ground ability to walk students through any portions of the performance that need to be addressed.

Here is Dr Huang taking us through a room designed for infant and prenatal care.

Additionally, because HIPAA regulations do not account for the privacy concerns of mannequins, permission to conduct these recordings would presumably be much easier to obtain. Thus, the Center has the potential to start building its own library of VR content while simultaneously establishing and fine-tuning the best practices and workflows required for recording simulation scenarios. Those same practices and workflows would carry over almost unchanged to real-life situations, once the Center overcomes the myriad obstacles it currently faces in building its own 360 video library of surgeries for use in VR training simulations.

Here again is Dr Huang, demonstrating the realism of simulation scenarios, leaving some among us slack-jawed in amazement.
Members of our group in another Sim Room, reacting to some lifelike behavior of the mannequin. They blink, are covered in a material that pretty accurately mimics human skin, and have a pulse, among other unsettlingly anthropomorphic traits. In point of fact, they are so lifelike that even with the bright, fluorescent lighting we humans have come to rely upon for survival, walking alone through the halls of the Sim Center can be quite the harrowing experience.

As with all of the places we’ve been fortunate enough to visit this quarter, the work we were shown at the UCLA Simulation Center was cutting-edge, state-of-the-art stuff. Similarly, as with every other place we’ve visited, there’s a feeling of latent potential, a sense of promise hitherto untapped. But rather than feeling discouraged, I’m excited. After all, such a lull in progress is to be expected if you believe in the Gartner Hype Cycle. We’re simply progressing through the Trough of Disillusionment towards the Slope of Enlightenment and onto the Plateau of Productivity. And the coolest thing about all this progress? It’s happening right here, in front of our very own eyes, all over UCLA. You just have to know where to look for it.

If nothing else, the UCLA Simulation Center can always fall back on my idea to simultaneously secure funding and raise awareness of the Center’s work: Halloween Fest 2019 at the UCLA Simulation Center. Imagine walking down all those hallways, surrounded by mannequins…with the lights off.

The only thing scarier than that would be being stuck at a party with these two.

Events Health

Your Brain + Shattuck Research Group = Star Trek Holodeck

Dr Shattuck set up his VR environment in the DIVE at the Brain Mapping Center. The setup consists of four battlestations (powerful PCs) equipped with Vive headsets and controllers, as well as two base stations. The setup is theoretically portable and just requires an open space. From right to left: Dr David Shattuck, Dr Maja Manojlovic, Dr Jacob Rosen, Yang Shen, and Dr Ji Ma.

Dr David Shattuck’s VR experience has manifested the virtual reality classroom that we’ve all envisioned in one form or another. The work is the culmination of 20 years’ worth of research and coding, and presents brain imaging and analysis in a futuristic VR environment. As with other posts, I will do my best to explain Dr Shattuck’s work, but perhaps more than any other lab, this one is something you just have to interact with for yourself. Words will, inevitably, fail to capture the full experience. Pictures will also fall flat, as I was only able to take pictures of the various PC monitors that displayed what users were viewing in the environment. (At the bottom you’ll find a video that does a better job of capturing a slice of the experience.)

The floating white headsets are other users that are sharing the experience. That big ole gray thing is a brain. The words are labels, and I’m told that the text explains the function of the part that the user selects. I’ve read the text over a couple of times and am still clueless.

The first thing you’ll notice about this environment is the jumble of text. A few things to note here: first, the labels can be toggled on and off. The text in the foreground is displayed only for the user who selected that particular section. The colored labels actually rotate to face the users as they walk around the brain. Lastly, and perhaps most impressive, the labels are automatically generated by software that Dr Shattuck has written. (Again, this is the product of 20 years of work. I almost said culmination, but I dare say that Dr Shattuck is not quite finished.)

Here Dr Shattuck combines two features: the brain with color coded sections along with a 2-dimensional gray MRI scan.

One of the features of the environment that was difficult to capture in a photograph was the way in which Dr Shattuck can manipulate that 2-dimensional MRI scan. With his controller, he is able to manipulate it in any direction, offering users a glimpse at a slice of the brain in any position. Impressively, Dr Shattuck told us that all of the necessary MRI data can be collected in under 30 minutes, and with powerful workstations this data can be used to generate all of the assets that we saw in the environment in 2-3 hours. And, of course, this timeframe will only become shorter as hardware improves.
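For the curious, slicing a volume along an arbitrary plane is a standard resampling trick. Here’s a minimal sketch of the idea in Python (my own toy illustration using SciPy’s interpolation routines; all names are hypothetical, and I have no idea how Dr Shattuck’s code actually does it):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_slice(volume, origin, u, v, size):
    """Sample an arbitrarily oriented 2-D slice out of a 3-D volume.

    `origin` is a point in voxel coordinates; `u` and `v` are two
    orthogonal unit vectors spanning the slice plane. Tilting u and v
    reorients the slice, which is roughly what moving the controller
    does in the VR environment. (Generic sketch, not the lab's code.)
    """
    rows, cols = np.mgrid[0:size, 0:size]
    # Build the 3-D voxel coordinate of every pixel in the slice...
    coords = (origin[:, None, None]
              + u[:, None, None] * rows
              + v[:, None, None] * cols)
    # ...and interpolate the volume at those coordinates (trilinear).
    return map_coordinates(volume, coords, order=1)
```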

Another view of the color-coded brain combined with the MRI scan image.

The quick turnaround time to translate MRI data into these virtual assets is a huge plus. Neurosurgeons, for example, can use this software for neurosurgical preparation, quickly turning an individual patient’s MRI data into an interactive, 3D environment. The software also allows users to be in separate locations, so a future where neurosurgeons use this environment to consult with other doctors is already here.

Neural pathways have never looked so psychedelic.

Another visualization of the brain that Dr Shattuck can offer is a color coded display of neural pathways. Blue indicates up and down, Red is left to right, and Green is front to back movement. Movement in this case is a loose term, because it refers to the direction of neural pathways. Dr Shattuck has written an algorithm that can track the movement of neural pathways, tracing them as they move from one section to another.
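For the programmers in the audience, that color convention boils down to something like this (a toy sketch of the standard directional color map as described above, not Dr Shattuck’s actual code, and assuming an x = left-right, y = front-back, z = up-down axis convention):

```python
import numpy as np

def direction_to_rgb(v):
    """Map a fiber-direction vector to an RGB color.

    Convention from the post: red encodes left-right (x), green encodes
    front-back (y), and blue encodes up-down (z). A fiber pointing one
    way looks the same as one pointing the opposite way, so we take
    absolute values after normalizing.
    """
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)   # normalize to unit length
    return np.abs(v)            # |x| -> R, |y| -> G, |z| -> B

# A pathway running straight left-right comes out pure red:
# direction_to_rgb([1, 0, 0]) -> [1., 0., 0.]
```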

Yet another way to look at your brain.

Additionally, Dr Shattuck can display slices of the brain to look at diffusion tensors. These are color-coded in the same way as described above (Blue up-down, Red left-right, Green front-back). Although many appear spherical, they are in fact elliptical, indicating the direction of water diffusion. Water diffusion, in turn, is directional, following the orientation of the neural fibers. This kind of inference from water diffusion works for muscle fibers as well as neurons. It allows scientists to infer the neural structure of the brain: how well one colored area is connected to other areas of the brain.
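If you want to see why “elliptical” matters, here’s a hedged little sketch: a diffusion tensor is a 3×3 matrix, its eigenvectors are the axes of the ellipsoid, and a standard quantity called fractional anisotropy measures how stretched it is (0 for a perfect sphere, approaching 1 for a long thin cigar). This is my own illustration of the textbook math, not code from the lab:

```python
import numpy as np

def tensor_shape(D):
    """Eigen-decompose a 3x3 diffusion tensor.

    Returns the principal diffusion direction (the ellipsoid's long
    axis) and the fractional anisotropy (FA), which measures how far
    the ellipsoid is from a sphere.
    """
    lam, vecs = np.linalg.eigh(np.asarray(D, dtype=float))
    fa = np.sqrt(1.5 * np.sum((lam - lam.mean()) ** 2) / np.sum(lam ** 2))
    principal = vecs[:, np.argmax(lam)]  # eigenvector of largest eigenvalue
    return principal, fa

# An isotropic tensor (equal diffusion in all directions) is a sphere,
# so its FA is zero:
# tensor_shape(np.eye(3)) -> (some axis, 0.0)
```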

Here’s a better example of how these diffusion tensors are elliptical. Pictured here is, yep, you guessed it, the Corpus Callosum.

The corpus callosum, as we all know, is a large nerve tract consisting of a flat bundle of commissural fibers, just below the brain’s cerebral cortex. Just as obvious, corpus callosum is Latin for “tough body.”

As with the ‘normal’ brain display, the fibrous neural network mode can also be simplified to highlight specific sections of the brain. Of course, I use the term “simplified” in an incredibly loose sense here, because there’s really nothing simple about any of this.

Dr Shattuck’s 20-year-long workflow can be seen first-hand in the software itself. Originally the focus of his thesis work, the software has been continually massaged and improved since he obtained his PhD, and he is, in my estimation, far from done. He’s eager to explore more applications. This workflow is not limited to brains (although his custom, self-written code that automatically labels brains is, by definition, limited to brains). Dr Shattuck told us that this is theoretically possible to do with any volumetric data, from heart scans to data chips. He has not yet done this with a heart scan, however, because apparently studying the most complicated object in the universe is enough.

Here is a screenshot of Dr Shattuck’s software with some MRI data loaded up.

The software that Dr Shattuck has developed is available for anyone to use. Here’s a screenshot showing what it looks like with fresh MRI data loaded up. The first thing the software does is isolate the brain, automatically removing the skull and anything else non-brain.

The software has isolated and highlighted the brain.

Once isolated, the software processes the MRI data scan by scan to produce the 3D model. This is not unlike other 3D modeling techniques such as photogrammetry, whereby several 2-dimensional images are stitched together to create a 3-dimensional model.
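The stacking step itself is simple enough to sketch in a few lines of Python (a toy illustration of the stack-and-segment idea; Dr Shattuck’s actual pipeline is, needless to say, far more sophisticated, and would hand the resulting mask to a surface extractor like marching cubes to get the mesh you see in VR):

```python
import numpy as np

def slices_to_volume(slices, threshold):
    """Stack 2-D MRI slices into a 3-D volume and segment it.

    Each slice is a 2-D array of intensities; stacking them gives a
    voxel grid, and thresholding gives a crude binary brain/non-brain
    mask whose surface a mesh extractor could then triangulate.
    """
    volume = np.stack(slices, axis=0)  # (n_slices, height, width)
    mask = volume > threshold          # crude binary segmentation
    return volume, mask

# Two tiny 2x2 "slices" make a 2x2x2 volume:
slices = [np.array([[0.1, 0.9], [0.8, 0.2]]),
          np.array([[0.7, 0.1], [0.2, 0.95]])]
volume, mask = slices_to_volume(slices, threshold=0.5)
```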

If your MRI scans were to find themselves in an 8-bit Nintendo game, they would look something like this.

The kind of shared-experience environment that Dr Shattuck has developed is everything that I had imagined an environment like this would be: extremely versatile, interactive, immediately intuitive, almost fully automated, and highly instructive. This is the quintessential example of what I envision future doctors will engage with to determine a patient’s brain health. It’s also a very useful pedagogical environment, and gives both instructors and students a look at the brain like never before.

As the software improves, it’s easy to see the myriad applications of just the brain imaging, to say nothing of other objects that can be loaded in and analyzed. On a personal note, I’m very much looking forward to working with Dr Shattuck to recreate this environment across campus, with Dr Shattuck in his lab and some VR equipment loaded and ready to go in one of the Library’s classrooms. Fingers crossed he’s got the time to do that with me!

Not too long ago I had the somewhat unique experience of having an MRI scan done. Because of my access to 3D printers, I was curious about obtaining the MRI data to attempt to convert it into a 3D-print-ready STL file. I managed to get that data, and if you’ll now excuse me, I’m off to convert my brain into a 3D model and then print out a copy of it.

Ain’t the future grand?

Health Location

The Bionics Lab: It’s as Awesome as it Sounds

The view upon entering. I did not touch the sign, I promise.

Did you know that there exists on campus at least one mad scientist’s lair? That is to say, what appears to be a mad scientist’s lair (don’t want to blow anyone’s cover). There are probably many more besides, but Dr Jacob Rosen’s Bionics Lab looks like a set for an upcoming Ridley Scott film called The Scientist’s Lair or something. And I mean that in the best way possible.

Strewn about the lab is a smorgasbord of stuff. Bits, pieces, bits of pieces, pieces of bits, and, of course, full and partial sets of exoskeletons are strewn about the lab in a manner befitting the minds that combine medicine, physiology, neuroscience, mechanical engineering, electrical engineering, bioengineering, and whatever else I’m missing (which, admittedly, is a lot). (Incidentally, those seem to be the recurring dual-themes of these lab visits: brilliant minds with broad specializations and me missing stuff.)

Check it out! I didn’t get a chance to ask what this thing was, but it sure does look neat.

When you first enter the space, it’s not graduate students wearing exoskeletons that jump out at you, but rather a big, giant plaster-like vertical dome thingy. That was the technical term they used, I believe. Further back in the lab you can see metal scaffolding in much the same shape; these both serve a similar purpose but you could say that one is more…stucco in place than the other. More on that later.

There’s the dome next to the double-doors that lead in and out of the space. Careful readers take note: directly in between the camera and the dome are a set of about nine blue handles. These will be relevant shortly.

I was greeted next to the dome by Dr Ji Ma, who was busy making minor adjustments to the four projectors that work together to produce an image up to 4K in resolution. However, Dr Ma was more keen to show me his nifty invention close by: a set of wearable inertial measurement unit, or IMU, sensors. As with the other equipment I was introduced to, these devices were designed to help in the physical rehabilitation of stroke patients.

Here’s Yang Shen, soon-to-be-Ph.D., wearing three of the sensors and demonstrating the quick, real-time virtual response elicited by his movements. The response was very responsive, in other words.

The wearables are still prototypes; they’re wireless, and each contains an accelerometer and a gyroscope to record the speed and range of the subject’s movements.

The guts of one of the wearables. Dr Ma told me that he plans to shrink each of these down to smaller than a coin. I was surprised at how much articulation was transmitted to the avatar: shoulder, elbow, and wrist movement all reacted to Yang’s movements in basically real-time.
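As an aside for the tinkerers: fusing an accelerometer and a gyroscope into a stable orientation estimate is commonly done with something like a complementary filter. Here’s a generic one-axis sketch (my own illustration of the standard technique; I don’t know what filter Dr Ma’s firmware actually runs):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of a complementary filter.

    The gyroscope gives a fast but slowly drifting rotation rate; the
    accelerometer gives a noisy but drift-free absolute tilt (from
    gravity). Blending the two yields a stable joint-angle estimate.
    """
    gyro_estimate = angle + gyro_rate * dt  # integrate angular velocity
    # Trust the gyro for quick motion, the accelerometer for the long run.
    return alpha * gyro_estimate + (1 - alpha) * accel_angle

# If the gyro reports no rotation and the accelerometer agrees with the
# current angle, the estimate stays put:
angle = complementary_filter(angle=0.5, gyro_rate=0.0,
                             accel_angle=0.5, dt=0.01)
```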

Like the exoskeleton I’m about to talk about, the idea behind these devices is to aid in the rehabilitation of upper-body movement of patients who have suffered brain damage from a stroke. But given how small these sensors will end up being, and how quick and accurate the response-time was, these devices can easily find their way into other applications in the XR world.

Just to whet your appetite a bit… a bionic hand!!

Astute readers have noticed that in the image of Yang demonstrating the IMU sensors there sits in the background what appears to be a brilliant blue exoskeleton. And so it is!

Look on this thing, ye Mighty, and despair! It is a Mirror Image Bilateral Training Exoskeleton. I haven’t heard back from Dr Rosen yet on my idea to rename it the Blue Bonecrusher. What about the Blue Beast, or just the Beast for short? I’m sure when this thing gains sentience it won’t mind being so informal.

Before I get into what is going to be the grossest of oversimplifications, I want to give a special thanks to Yang Shen, a Ph.D. candidate who’s been with the lab for several years now, for being so patient and walking me through this incredible device. So, as I said, the device is used for Mirror Image Bilateral Training. Stroke patients often suffer damage to one or both hemispheres of the brain. When it’s one or the other, movement can become impaired. Based on the theory of neuroplasticity (the idea that the brain can form new neurons and new neural pathways), this device is meant to strengthen the damaged hemisphere by letting the healthy side provide the proper and/or full range of movement.

With both patient arms strapped in, the operator would select the correct mode, allowing the patient’s healthy side to take the wheel, so to speak, with the damaged side following alongside with precise mirrored movements.
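In pseudocode-ish Python, the mirroring idea looks something like this (a deliberately naive sketch with hypothetical names; the real controller has to solve this for all seven degrees of freedom with proper kinematics):

```python
def mirror_joint_angles(healthy_angles, flip):
    """Compute target joint angles for the impaired arm from the
    healthy arm's pose, as in mirror-image bilateral training.

    For a mirrored movement, some joints copy the healthy side
    directly (e.g. elbow flexion) while others must flip sign
    (e.g. rotation about the body's midline). `flip` marks which.
    """
    return [-a if f else a for a, f in zip(healthy_angles, flip)]

# Elbow flexion copies, shoulder rotation flips:
targets = mirror_joint_angles([0.8, 0.3], flip=[False, True])
```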

Impaired movement as a result of a stroke stems solely from brain damage, not damage to the muscles (e.g. muscular dystrophy) or nerves (e.g. spinal injury). Because of this, it is believed that thinking the command to move your arm, coupled with the arm actually moving the way it’s meant to, helps rehabilitate the damaged hemisphere.

The device also allows for pre-programmed movements (as programmed by a physical trainer, for example), is height- and width-adjustable, and consists of three harmonic drives and four Maxon DC motors for a full seven degrees of freedom, but it only comes in one color. Priorities, Dr Rosen, priorities. Where’s my candy red model?

Here’s Yang, demonstrating how a patient’s arm would strap in to the machine. He told me to tell you all that if you all don’t think this is super-duper cool then he WILL crush you into bonedust. Just kidding. He can’t do that because the device isn’t exactly mobile. Yet…

Remember those nine blue handles I mentioned earlier? No? Well you’re not a very close reader, are you? (Hint: caption, third image)

Pfft, you believe that guy? Anyway, those nine blue handles sit directly across from The Blue Bonecrusher. They’re part of the studies that the lab has been conducting with the exoskeleton, and will also factor in to how this all relates to VR, because as awesome as all of this is, it ain’t exactly the point of this, now is it? As you might have already guessed, the idea here is for patients to reach out, grab the handles, and turn. A healthy person will use up to six degrees of freedom to perform this task, even though a human arm comes with seven. This seventh degree is known as redundancy, not unlike my job in a few years’ time.

Stroke patients often do not lose all seven degrees of freedom. As such, although they may be capable of reaching out and grabbing the handle, or otherwise capable of limited movement, the danger lies in reinforcing bad habits. Enter, once again, The Blue Bonecrusher. Instead of allowing patients to unduly rely on (and thus reinforce), say, five degrees of freedom, the exoskeleton guides and assists patients through a healthy range of motion.

Yang’s also published a study using The Blue Bonecrusher.

Yang’s study goes something like this (and brings us tantalizingly closer to the whole connection to XR). Let’s assume that a healthy range of movement involves an arm moving from point A to point D, passing points B and C along the way. Now let’s assume that a stroke patient has difficulty moving their arm from point A to B and from C to D, but has no difficulty moving from point B to C. Yang’s programmed the Beast to assist only during those portions of the range from A to D that the patient struggles with. Yang called it Asymmetric Bilateral Movement, or “Assist as Needed” movement.
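That “Assist as Needed” scheduling can be sketched in a few lines (my own toy version of the idea, with made-up names; Yang’s published controller is, of course, much more sophisticated than an on/off switch):

```python
def assist_gain(s, impaired_segments, max_gain=1.0):
    """Return the exoskeleton's assistance gain at normalized
    trajectory position s (0 = start of the reach, 1 = end).

    Assistance is applied only inside the segments of the movement
    the patient struggles with, and switched off where the patient
    can move unaided.
    """
    for lo, hi in impaired_segments:
        if lo <= s <= hi:
            return max_gain
    return 0.0

# Say the patient struggles in the first and last quarter of the reach
# (the A-to-B and C-to-D portions in the example above):
segments = [(0.0, 0.25), (0.75, 1.0)]
```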

Now, that’s impressive enough as it is. But the folks in the Bionics Lair (I’m just renaming everything now) like to be precise, and like so many scientists, they like to measure stuff. A lot.

Which brings us, finally, to XR!! Near the ceiling in this image you’ll see part of a set of 10 infrared motion capture cameras. Also note the dome scaffolding, that’s coming up in a bit, too.

In order to more accurately program the machine, the Bionics Lair uses these motion capture cameras to record instances of a healthy human reaching for the handles and turning them (or instances of a stroke patient using their healthy hemisphere to make the move). Then, frame by frame (which translates to something like millisecond by millisecond), the researchers can measure each joint’s position relative to the other joints, ultimately plugging all of that information into the Beast.

But that’s not all! As part of Yang’s Assist as Needed study, he incorporated visual stimulation in a virtual reality environment. After all, there’s more to life than just reaching out, grabbing a random handle, and turning it. Yang provided patients with a variety of virtual objects to reach towards and interact with, tying a virtual ‘string’ between the patient’s palm and the object and pulling on the patient’s arm with the string to assist with the movement, you guessed it, as needed. Yang’s not bad at naming stuff, I’ll admit, but I think I’m better.

Enter, once again, The Dome. The story has come full circle, we are back to the beginning! Or is it the end?

If you’re anything like me, you’ve sometimes gotten a bit queasy when trying on a VR headset. Now imagine you’re not just a recovering stroke patient, but more than likely a geriatric one who is about as familiar and comfortable around cutting-edge technology as I am with my mother-in-law. Having said that, I bet none of you will be very surprised when I say that many of these stroke patients were getting nauseous while using VR headsets.

In an effort to keep himself, his lab, his equipment, and ostensibly his students vomit-free, Dr Rosen designed this dome to serve as a replacement for VR headsets. The dome is placed directly in front of patients, thereby giving them near-total immersion while still allowing them to see the ground underneath them. Seeing the ground beneath them helps the patients to stay oriented and to keep their lunches from making a reappearance. The white dome is a fixed prototype; the scaffolding I mentioned and pictured earlier, is meant to be more portable.

I call dibs on whatever Dr Rosen’s Bionics Lair cooks up next because it’s sure to be some kind of exoskeleton. And who doesn’t want an exoskeleton?

Internal skeletons are so passé. By the way, check out Yang’s website for more info about the work he’s doing:

I tried walking out with it but couldn’t do it. This will have to suffice.

For further reading that is far above my ability to comprehend, Yang has shared with me some of the published work that I mentioned above:

Y. Shen, P. W. Ferguson, J. Ma and J. Rosen, “Chapter 4 – Upper Limb Wearable Exoskeleton Systems for Rehabilitation: State of the Art Review and a Case Study of the EXO-UL8—Dual-Arm Exoskeleton System,” Wearable Technology in Medicine and Health Care (R. K.-Y. Tong, ed.), Academic Press, 2018, pp. 71-90.

Y. Shen, J. Ma, B. Dobkin and J. Rosen, “Asymmetric Dual Arm Approach For Post Stroke Recovery Of Motor Functions Utilizing The EXO-UL8 Exoskeleton System: A Pilot Study,” 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, 2018, pp. 1701-1707.
doi: 10.1109/EMBC.2018.8512665

Y. Shen, B. P. Hsiao, J. Ma and J. Rosen, “Upper limb redundancy resolution under gravitational loading conditions: Arm postural stability index based on dynamic manipulability analysis,” 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), Birmingham, 2017, pp. 332-338.
doi: 10.1109/HUMANOIDS.2017.8246894

Ferguson P.W., Dimapasoc B., Shen Y., Rosen J. (2019) Design of a Hand Exoskeleton for Use with Upper Limb Exoskeletons. In: Carrozza M., Micera S., Pons J. (eds) Wearable Robotics: Challenges and Trends. WeRob 2018. Biosystems & Biorobotics, vol 22. Springer, Cham.
doi: 10.1007/978-3-030-01887-0_53