Mixed reality: Is it the blend that will boost human wellbeing?

From simulating the evaluation of cardiovascular patients to turning CT scans into 3D holograms, Forward explores mixed reality and its increasingly innovative use in medicine.

Forward: features are independent pieces written for Mewburn Ellis discussing and celebrating the best of innovation and exploration from the scientific and entrepreneurial worlds.

In the dynamic world of e-sports, fitness game HADO has taken Asia-Pacific by storm. By donning lightweight visors and wrist-mounted sensors, players can see themselves hurl energy balls at each other or raise tall, glowing shields to block opponents’ blasts. The wrist sensors are hooked into a nearby graphics processor that understands the dimensions of the physical playing surface, enabling the players, through their visors, to see and interact with superimposed, digital objects that don’t exist outside of the play-space.

While it may look like just a bit of fun, HADO is actually at the cutting edge of a type of human-machine interaction called mixed reality (MR) – one of three technologies that sit under the broad umbrella of ‘extended reality’ (XR). Although they are frequently discussed as if they are one and the same, these technologies are in fact clearly differentiated:

Virtual reality (VR) refers to headset- or visor-based immersion in a computer-generated environment that completely blocks out the real world.

Augmented reality (AR) refers to headset- or screen-based live views of real-world settings, overlaid with computer-generated objects or data with which the user cannot interact.

MR is a step up from AR and lets the user manipulate or interact with the digital objects or data visible via the headset or screen. Importantly, these graphics are anchored to or interrelate with parts of the surrounding, real-world environment.

MR is only now approaching maturity – but its roots stretch back more than a decade, and use cases have been conceived and demonstrated in a variety of fields.

For example, in the realm of entertainment, MR headset- and monitor-based displays have enabled filmmakers to bring motion-capture characters and other digital assets into the same frame as live-action elements. This often requires them to make real-time adjustments to virtual set extensions and other computer-generated objects to conform with what is physically in front of them. Examples include the Simulcam system developed for James Cameron’s 2009 film Avatar.

A broad range of MR applications has been envisaged for manufacturing too, including overlaid, interactive instruction manuals and diagrams to assist with product assembly and quality-control tasks. MR could also allow virtual mentors and supervisors to be incorporated into factory-floor training programmes, and enable the pre-visualisation of products such as furniture or industrial machines to see how they would fit into or interact with their intended real-world settings.

Clinical innovation

In 2009’s Annual Review of Cybertherapy and Telemedicine, researchers at the University of Central Florida published a paper describing an experiment in which an MR kitchen, built around a sequence of repetitive tasks, was found to be beneficial in the rehabilitation of patients with post-traumatic stress disorder and traumatic brain injury.

Also that year, researchers from the Maggiore Carlo Alberto Pizzardi Hospital in Bologna and the Perceptual Robotics Laboratory in Pisa published a groundbreaking paper in the medical journal Resuscitation. Following a set of pioneering experiments with what they called a ‘virtual reality enhanced mannequin’ (VREM), the researchers concluded that blending first-person VR graphics with a physical training tool provided a compelling means of simulating the evaluation of cardiovascular patients.

By introducing VR overlays to a mannequin subject, with the former being spatially mapped onto the latter, the researchers sought to eliminate ‘problems such as the absence of overall body animation, particularly facial interaction and expression, [and] the absence of skin changes (eg, colour, temperature, dampness, sweating, etc)’. These are issues that, in the context of a standard mannequin, preclude the observation of clinical signs in a simulated physical exam.

The paper added: ‘Real-time animations were implemented in order to simulate some of the typical clinical findings indicative of a cardiac arrest, including progressive skin colour changes and mydriasis. These reverted once the manoeuvre of the external cardiac compression was successful.’

All 39 users who tried out the VREM prototype – doctors, nurses and a selection of lay rescuers among them – found the use of VR headsets and gloves acceptable, and the overall experience immersive and realistic. While the researchers didn’t actually use the term in their paper, the experiment’s combination of digital and physical elements in a real-time, interactive setting met all the essential criteria for MR.

And it has now paved the way for some exciting developments in the surgical field.

Anatomy of progress

As a musculoskeletal consultant radiologist, Dr Dimitri Amiras spends his professional life thinking about how to capture images of patients’ anatomies that will provide the most accurate guidance for surgical procedures.

Over the past few years, he and his colleagues at Imperial College Healthcare NHS Trust have been trialling MR technology that takes images from CT scans and turns them into digital, 3D holograms that clinicians can view in the operating theatre through either headsets or transparent, flat screens. Dr Amiras himself has been responsible for helping the software differentiate between muscle, bone and fatty tissue.

In an evolutionary leap from the VREM experiment, the holograms can be mapped in real time – and at 1:1 scale – onto the bodies of the patients from which they are derived, providing greater diagnostic and procedural clarity.
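In broad strokes, distinguishing tissue types in a CT scan can start from the scan’s Hounsfield-unit (HU) values, which differ characteristically between fat, muscle and bone. The sketch below is only a minimal, illustrative Python example of that idea; the threshold values and the segment_ct helper are assumptions for illustration, not the software used at Imperial.

```python
import numpy as np

# Rough, illustrative Hounsfield-unit (HU) ranges. Clinical pipelines use
# far more sophisticated segmentation; simple thresholding just shows the idea.
FAT_HU = (-150, -30)
MUSCLE_HU = (10, 60)
BONE_HU = (300, 3000)

def segment_ct(volume_hu):
    """Return boolean masks for fat, muscle and bone from a CT volume in HU."""
    def in_range(lo, hi):
        return (volume_hu >= lo) & (volume_hu <= hi)
    return {
        "fat": in_range(*FAT_HU),
        "muscle": in_range(*MUSCLE_HU),
        "bone": in_range(*BONE_HU),
    }

# Example with a synthetic 64x64x64 volume standing in for a real CT scan.
volume = np.random.randint(-200, 1500, size=(64, 64, 64))
masks = segment_ct(volume)
print({tissue: int(mask.sum()) for tissue, mask in masks.items()})
```

A real pipeline would add noise filtering, anatomical knowledge and surface reconstruction before any hologram could be rendered, but the HU ranges give a sense of why software can tell bone from soft tissue at all.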

‘The most important thing about using MR,’ Dr Amiras tells Forward, ‘is that it puts information within its proper, spatial context. With a standard picture on a screen, you have to look at it, put it in your mind, then look at the patient. Or with text, you read a report, create something in your mind and then look at the patient.

‘But MR puts the data in the patient, to scale, so that the surgeon can understand the anatomy. And we hope in the future we’ll be able to use it to guide intraoperative procedures with greater accuracy than we can with older methods.’

“MR puts the data in the patient, to scale, so that the surgeon can understand the anatomy”

For Dr Amiras, MR’s facility for enabling several members of a surgical team to see exactly the same images through their headsets ‘suddenly means that everyone’s speaking the same language. When I was a medical student, we would go into the operating theatre and huddle around the surgeon to see what he was up to. But MR puts everyone on the same page.’

In Dr Amiras’s view, the tech naturally lends itself to training. In Imperial’s digital learning hub, he says, they have now created a sort of simulated environment. As he explains: ‘MR enables us to put physical material in [the environment] and superimpose part of someone’s anatomy onto it.’

For example: ‘We had a tubful of agar jelly which, if you get it right, has the same consistency as flesh. And we were able to project a patient onto the jelly and then reconstruct a CT biopsy – something that would normally involve putting someone in a scanner. But with MR, we were able to simulate the whole thing quite realistically, just by using a tub of jelly, some biopsy needles and a few stickers to track the needles.’
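Tracking needles with stickers is, in essence, marker-based registration: the system observes a handful of fiducial markers and computes the rigid transform that lines the virtual anatomy up with the physical prop. The article doesn’t detail the method used at Imperial; purely as a general, hypothetical illustration, a minimal point-based alignment (the Kabsch algorithm) might look like this in Python:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src points onto dst
    (Kabsch algorithm). src and dst are (N, 3) arrays of matched points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Illustrative example: the same three markers, once in the hologram's own
# coordinate frame and once as observed by the headset's cameras in the room.
hologram_pts = np.array([[0.00, 0.00, 0.00],
                         [0.15, 0.00, 0.00],
                         [0.08, 0.15, 0.00]])
camera_pts = np.array([[0.10, 0.00, 0.50],
                       [0.25, 0.02, 0.48],
                       [0.18, 0.15, 0.52]])

R, t = rigid_align(hologram_pts, camera_pts)     # maps hologram frame -> room
print("rotation:\n", R, "\ntranslation:", t)
```

Once R and t are known, every vertex of the virtual anatomy can be transformed into the room’s coordinates each frame, which is what keeps a projection anchored to a physical object as the viewer moves around it.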

Dr Amiras adds: ‘I know other researchers are working on MR tech for simulating high-stress environments such as trauma calls and for providing remote, surgical support – potentially from a different country [than the patient is in]. So I think two big areas for training are simulations, and then recording those sessions to create an experiential learning database.’

Tom Furnival, a patent attorney in the engineering team at Mewburn Ellis, Cambridge, is among those following developments in MR with interest: ‘As it moves from the lab into the real world, the powerful impact MR can have on existing processes is becoming increasingly apparent. In particular, it’s incredibly exciting to see the application of MR in the context of medicine, where it is enabling access to a wealth of previously inaccessible or context-less data for surgeons.’

Written by Matt Packer