I’ve had my conversion moment – I now believe in the power and potential of mixed reality. In this post I want to explain how it happened and what it means for the future of museums.
I first wrote about the concept of “mixed reality” last summer, after attending the Games For Change conference, and after reading a Wired Magazine cover article. At the time I wasn’t sure if MR was just the latest marketing term or if there was something really there there. I think it’s worth quoting in full, as you can see me trying to get my head around it:
The May cover article in Wired Magazine, “The Untold Story of Magic Leap,” by founding executive editor Kevin Kelly, is a crash course in the latest and greatest in virtual reality (and its relatives). … Kelly brings in the term “mixed reality” (or MR for short) to describe a number of devices that a user wears like a pair of super goggles and, more often than not, while tethered by hardline to a computer. The best I can figure is that MR is augmented reality through a worn device. When I saw Graeme Devine, of Magic Leap, present on his company last week at the Games For Change Festival, he, too, used the term MR (which is where Kelly might have picked it up). Devine defined it as “the mixture of the real world and virtual worlds so that one understands the other.” The power of MR, as I came to understand it, is that AR typically uses one discrete target (like a coin or sticker) to trigger an augmented experience while MR maps the shape of a room and uses that entire map as the canvas upon which to paint. In other words, with MR, the entire surrounding space is in play for layers of augmentation (an exciting prospect for museums).
That excitement I referenced at the end was more aspirational. I certainly wasn’t feeling it. Sure, I saw the value in a device understanding the space around it but, okay, then what? It sounded like a nice feature, but for a limited use case.
Then I started my new position at the Museum. I was no longer reading about Google Tango, Microsoft’s Hololens and the HTC Vive – I was now developing in them and could play with these tools whenever I wanted.
Our first prototype, as a team, brought an AR Shark into our halls using a Hololens:
It was certainly cool watching a physical shark hanging over my head turn into an interactive shark skeleton. But even then I thought of the Hololens strictly as an augmented reality device – a way to bring a single virtual object into the space around our visitors. I still wasn’t getting it.
Over the Thanksgiving weekend, I took the Hololens home. I played with it all weekend, on my own, with my kids, and with my wife. I used the apps (if that’s the right word for them) currently available for download. Some are short experiences you can play in a few minutes while others are designed to last for hours. As we transformed our home into a palette for mixed reality, I finally understood how MR differs from both AR and VR. More importantly, I fell in love with the experience.
First, a bit about Hololens. The device maps the space around you – a limited space, but big enough to cover most of a small room. That map is then saved, so when you switch from one app to another the same map can be used. Of course, the map is neither tied to a location in physical space (a map of Dorothy’s bedroom works just as well in Kansas as it does in Oz) nor a flat layout of the room. It is a topographical map of the space, as if the room were a mountain range. That means, for example, if you map the room and then take your chair away, the map will still behave as if the chair is there. It also means the map understands occlusion: if a virtual object sits behind the chair from where you’re standing, the chair should block some or all of it.
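To make the occlusion idea concrete, here is a rough sketch in Python. It is purely conceptual – the function names and the depth lookup are my own inventions, not the Hololens SDK – but it captures the logic: a hologram is hidden wherever the saved map says a real surface is closer to your eye along the same line of sight.

```python
# Conceptual sketch only -- not the Hololens SDK. It illustrates the
# occlusion idea: the device keeps a 3-D map of real surfaces, and a
# hologram is hidden wherever a real surface is closer to your eye
# along the same line of sight.

def visible_fraction(virtual_points, real_depth, eye):
    """virtual_points: sample points on the hologram's surface.
    real_depth(direction): distance from the eye to the nearest
    *real* surface (e.g. the chair) along that direction, taken
    from the saved spatial map. Returns the fraction of the
    hologram the viewer can actually see."""
    visible = 0
    for p in virtual_points:
        direction = normalize(sub(p, eye))
        dist_to_hologram = distance(eye, p)
        # If the mapped chair (or wall, or couch) is nearer than the
        # hologram along this ray, this bit of the hologram is hidden.
        if real_depth(direction) >= dist_to_hologram:
            visible += 1
    return visible / len(virtual_points)

# Tiny vector helpers so the sketch is self-contained.
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def distance(a, b): return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

# Toy check: a hologram floating in open space is fully visible.
print(visible_fraction(
    virtual_points=[(0.0, 1.0, 2.0)],
    real_depth=lambda direction: 10.0,   # nearest real surface is far away
    eye=(0.0, 1.0, 0.0)))
```

Notice that if real_depth comes from the saved map rather than a live scan, removing the physical chair changes nothing – which is exactly the quirk described above.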
In addition – and I am not clear on how this works – Hololens can build awareness of the relationships AMONGST the maps. So if I map two different bedrooms, plus the living room, the Hololens is aware, at some level, that, say, on the other side of this bedroom wall lies my living room. Why this is relevant I hope to explain below.
HOLOGRAMS
The basic app allows you to place pre-constructed 3-D shapes into your mapped spaces. A dog. An octopus. Letters and numbers. Some of these holograms are animated and can be triggered with a tap. A dancing ballerina. A baby knocking down a pyramid of cups. A mime… miming. When you leave the app and return, even if you’ve turned off the device, your objects are still there. If you walk from one map to another – which simply means walking around your house – you’ll see all the holograms you’ve placed along the way. Here are some photos of my kids with their creations, placed around our house:
The kids LOVED decorating their rooms, and dragging us in to see what they created. It was like painting their rooms with invisible ink – they could do whatever they wanted and experience the impact, yet there was nothing we needed to clean up. It was important to each of them to be able to personalize their experience – they both went through the tedious process of spelling out their names, one letter at a time. By the end of the weekend, the apartment was full of holograms, a virtual home makeover in every corner. At times I would expect to see that hamster cage on the media cabinet, or the rainbow on the floor of the bathroom, even when I wasn’t wearing the device, simply because I was beginning to accept their presence in our home.
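I don’t know exactly how Microsoft stores these placements, but conceptually each hologram only needs to be recorded relative to the saved room map – an “anchor” – so it can be re-attached the next time the device recognizes the space. A minimal sketch, assuming a simple JSON file and made-up anchor names:

```python
# Conceptual sketch only -- not how the Holograms app actually persists
# objects. The idea: each placed hologram is stored relative to an
# anchor in the saved room map, so it can be restored after a restart
# or when you come back from a different app.

import json

def save_holograms(placements, path="holograms.json"):
    """placements: list of dicts like
    {"model": "ballerina", "anchor": "living_room_map",
     "position": [1.2, 0.0, 0.4], "rotation_deg": 90}
    The position is relative to the anchor, not to the headset,
    which is why the ballerina stays on the same spot of carpet."""
    with open(path, "w") as f:
        json.dump(placements, f)

def load_holograms(path="holograms.json"):
    with open(path) as f:
        return json.load(f)

# Example: the kids' letters survive a reboot because they are
# re-attached to the same saved map, not re-placed by hand.
save_holograms([
    {"model": "letter_A", "anchor": "bedroom_1_map",
     "position": [0.5, 1.0, -0.2], "rotation_deg": 0},
])
print(load_holograms())
```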
YOUNG CONKER
Even though some holograms offered limited interactivity, most of the time they were experienced as static objects in our shared space. Young Conker, however, offers a great example of how the relationships between physical and virtual objects can become the context for a game.
Young Conker, aimed at children, is a game in which you use your gaze to control where the main character goes. In addition, the player uses the Hololens’s tap gesture to make the character do things like jump, flip switches, and so on. This is a multi-level game in which you collect coins, avoid or defeat enemies, and solve basic puzzles (e.g., collect the missing papers).
Young Conker takes the map of the play space (in my case, the living room) and brilliantly leverages (we need a word for this) locations where the real and the virtual intersect, enhancing the sense of co-presence. For example, Conker and the enemies will jump up and down off my couch in a chase. There’s an ottoman in front of my couch – a launch pad on the carpet will launch Conker onto the ottoman, where there’s some virtual stuff for him to interact with, and then he jumps back down to the carpet. Where my media cabinet stands on the carpet, Young Conker places not just a door (into the media cabinet) but a lowered set of stairs leading to the door, sunk down into the carpet. Examples like this create a powerful sense of co-presence, and an intense engagement as a result. The experience is immersive, but without isolating me from either the space around me or the people within it.
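I can only guess at how the game is built, but the core mechanic – look at a spot on a real surface, tap, and Conker goes there – might conceptually look something like the following. The names and the toy “room mesh” are my own, purely for illustration:

```python
# Conceptual sketch only -- my own illustration, not Young Conker's or
# Microsoft's code. The mechanic: cast a ray from the player's gaze,
# find where it hits the scanned room geometry (couch, ottoman, carpet),
# and send the character to that point on a real surface.

from dataclasses import dataclass

@dataclass
class Hit:
    point: tuple    # where the gaze ray meets a real surface
    surface: str    # e.g. "couch_top", "ottoman", "floor"

def gaze_raycast(head_position, gaze_direction, room_mesh):
    """Stand-in for the device's raycast against the spatial map.
    Here we just look up a precomputed answer for the demo below."""
    return room_mesh.get(gaze_direction)

def on_tap(character, head_position, gaze_direction, room_mesh):
    hit = gaze_raycast(head_position, gaze_direction, room_mesh)
    if hit is None:
        return  # the player is looking at empty air or an unmapped area
    # The character moves across *real* geometry, which is what makes
    # it feel co-present: it climbs the ottoman rather than floating.
    character["target"] = hit.point
    character["standing_on"] = hit.surface

# Toy "room mesh": a lookup from gaze direction to a surface hit.
room_mesh = {"down_and_left": Hit(point=(0.8, 0.4, 1.1), surface="ottoman")}
conker = {"target": None, "standing_on": None}
on_tap(conker, head_position=(0, 1.7, 0),
       gaze_direction="down_and_left", room_mesh=room_mesh)
print(conker)   # Conker heads for the top of the ottoman
```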
FRAGMENTS
Fragments is Young Conker for teens and adults. A boy has been kidnapped and we need to rescue him. We work for an agency that has a new technology: memory fragments can be reconstructed and explored in 3-D space. Fragments is essentially a series of 11 escape rooms, each room being a “memory” we explore mapped onto the room around us. I can’t say for sure, but I must have played it for 15 hours before arriving at the end of the rather elaborate, dark, and fascinating narrative.
While interactivity was limited in Conker, Fragments offered a wide range of ways to explore each scene and solve the puzzles. Once the map was established, the kidnapped boy and his abductor were placed across the room, the memory playing for a few seconds then freezing in time, Matrix-style. Then I could walk around my room, or the scene (now one and the same), moving closer to or farther from objects, interacting with things I found. Over the course of the game, our toolkit expands – listen for sounds, observe heat signatures, explore with a UV filter, etc. Each new tool gave me a new way to experience the augmented physical space and motivation to walk around my room.
Points of interest were everywhere – on my floor, on my wall, on my furniture, in my garbage can (oh wait, that’s actually a virtual garbage can). The level of immersion, as a result, was even deeper than I’d experienced with Conker. And the emotional experience of sharing the space with the intense animations – a boy being threatened, my colleague breaking through my ceiling to fire at the abductor, etc. – was intense and reverberated beyond the game play.
And then there were the interstitials – moments both didactic and narrative – that linked the interactive scenes together. In the narrative, a device is placed in the room that generates Skype-like meetings of holograms (in the story, we are in different places and we meet as holograms). At first, the device sat on my ottoman and my holographic colleagues stood around it. Another time, after I moved the ottoman and re-mapped the room, the device sat on the ground, and one of my colleagues sat on the ottoman during the meeting. As I played the game over the long weekend, re-mapping the room each day and often replaying the scene, I experienced the ingenious way the algorithm adapted to each map and placed people and objects in different, but always coherent, ways.
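I have no insight into Fragments’ actual code, but the effect could, in principle, come from a small set of placement rules run against whatever surfaces the latest scan found. A minimal sketch, with made-up surface tags:

```python
# Conceptual sketch only -- not Fragments' actual placement logic.
# Given whatever surfaces the latest room scan found, pick a spot for
# each prop and character so the scene stays coherent: the meeting
# device goes on a raised surface if one exists, otherwise on the
# floor; characters take a free seat if one is left, otherwise stand.

def place_scene(surfaces):
    """surfaces: list of dicts like
    {"name": "ottoman", "kind": "raised", "sittable": True}
    drawn from the current spatial map. Returns a placement plan."""
    raised = [s for s in surfaces if s["kind"] == "raised"]
    seats = [s for s in surfaces if s.get("sittable")]

    plan = {}
    if raised:
        device_spot = raised[0]
        plan["meeting_device"] = device_spot["name"]
        # A surface holding the device is no longer free to sit on.
        seats = [s for s in seats if s is not device_spot]
    else:
        plan["meeting_device"] = "floor"

    for i, name in enumerate(["colleague_1", "colleague_2"]):
        plan[name] = f"sit on {seats[i]['name']}" if i < len(seats) else "stand"
    return plan

# With an ottoman in the scan, the device sits on it and everyone stands:
print(place_scene([{"name": "ottoman", "kind": "raised", "sittable": True}]))
# With no raised surface in the scan, the device drops to the floor --
# a different arrangement, but still a coherent one.
print(place_scene([]))
```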
As with Conker, the experience was immersive: I felt a sense of co-presence with the characters, I could comfortably walk around my room (trusting my own eyes rather than a virtual barrier), and I could talk with the people around me (such as to ask them to write down some code I had uncovered).
When I got back to the office, I tried to place Fragments in the same hall where we had offered visitors the AR Shark. It’s hard to see in the video below, but it just didn’t work. The game was designed to work in a typical home. Typical homes have shorter ceilings and smaller rooms than a typical museum hall. Typical homes have a rich topography of raised objects – things to sit on, things to put things on – while the museum hall has little to nothing, outside the exhibits themselves. So as you might see in the video below (my apologies for recording with the video off), the sense of immersion was still strong, but the sense of co-presence was lacking. This, of course, is a critique of neither the game nor the Hololens, but it provides a helpful contrast that highlights how best to use the device (and why our AR Shark couldn’t take full advantage of the Hololens within that hall).
So what did I learn from my weekend with mixed reality?
If augmented reality is about co-presence – allowing me to share a space with a virtual character or object – and if virtual reality is about immersion – allowing me to feel transported to a new location – mixed reality brings the best of both into one device. At the same time, it suggests ways around the challenges each faces: AR struggles to get past both the need for a physical target (an image, a coin) and the inability of that target to relate to the space around it; VR struggles with the flip side of immersion, which is social isolation. It’s always cool to hold a virtual object in your hand, but it’s a different level of co-presence to see it interact with the physical world around you. Watching someone walk in a VR helmet is like watching someone blindfolded; they don’t trust that they are safe. With MR, you see the world around you and you move comfortably – perhaps we can call it being comfortably immersed.
We all have a lot to learn about what we want from virtual people, objects, and environments and the different tools we’ll need to make it happen. But as we learn more about what visitors need in museums (and I suspect that includes maintaining contact with their social group and with the space around them) and what they desire from virtuality (and I suspect that includes co-presence and immersive experiences), mixed reality suggests a strong path forward.
I find that to be an exciting prospect.