It all started with a fish tank. It was the 1990s, and I was in grad school, working on a project for the University of Michigan's Advanced Technology Lab in the EE department. Someone at another university had aimed a webcam at the office fish tank and published the feed on the early web.
You could hit the refresh button every 30 seconds on primitive browsers like HotMetal to see if the fish had moved. Our research goals at the time were a few orders of magnitude more sophisticated. We were trying to make telerobots controllable over unreliable networks.
Whether looking at the fish long distance or controlling a robot long distance, I got an early taste of how technology could mediate everyday life in strange, unexpected and compelling ways, including the feeling of being somewhere far away and interacting with remote environments.
VR and AR have come a long way since kludgy telepresence and webcasted fish, but my commitment to the reality in virtual reality hasn't diminished. I believe strongly that the convergence of health and wellness with VR and AR, a dark horse sector, is an important field.
More broadly, as an investor, I am interested in non-obvious applications of VR and AR. In a prior life I trained as an actor, so the role of tech in augmenting human storytelling (e.g. narrative through-lines, emotive characterization, scene-setting) is something I'm very passionate about as well.
These days, when meeting entrepreneurs to brainstorm their new VR/AR idea, I start with two simple questions. I call it my Nerdy Novelty-ware Test: will people use this habitually over the long term, and does the head-mounted display genuinely augment what they can do?
The answers can reveal deal-breakers for me, because long-term habitual usage and ground-breaking augmentation of abilities via HMD are essential. This is why VR/AR entertainment doesn't hold anywhere near as much interest for me as real-world, real-life use cases do.
And although I've been reluctant to pull the trigger on any early-stage VR deals out of the half dozen I've seen to date (mostly focused on gaming or 360 video), I'm no VR skeptic, not by a long shot. There are five big areas that I am particularly excited about: consumer VR experiences, the VR office, VR retail, VR therapeutics and VR toolboxes. You'll note that none of those are inherently about entertainment, which is where most people's imaginations go when talking about VR.
Here is where I think the longer-term potential lies for the reality in virtual reality.
Opportunities: enterprise, healthcare, construction, design
In healthcare, I'm not just interested in helping doctors rehearse for surgeries, but also in the notion of explaining procedures to patients ahead of time. If you remember the movie Fantastic Voyage from the mid-1960s (science-fiction grand master Isaac Asimov wrote the novelization), what if you could do an MRI scan and then fly inside your own body, taking a VR fantastic voyage within yourself?
It's hugely interesting for pregnancy, when you could go inside the womb and virtually visit your soon-to-be-born baby. I'm thinking, in part, of VRSE's early demo, as well as Alexander Tsiaras' terrific 2010 TED talk, and what a hugely emotional experience it would be to see your own baby at giant scale within the womb. Just imagine a VR equivalent.
VR job sims are a natural, as foreshadowed by the early kitchen chef demo on the Vive. As they become increasingly high fidelity, these will become VR assessment and training tools for real jobs, and then someday VR telecommuting interfaces for actually doing the real work within VR.
Other industries to consider include construction and architecture. I'm dying for a VR/AR remodeling tool for myself when trying to decide among hundreds of textures for floors and wallpaper. It's total hell today, and it makes perfect sense for a VR or AR visualization interface.
Data visualization in VR is also a natural killer app, especially for visualizing future moves. Imagine playing chess or Go, with the AI in the VR engine showing all possible next moves and interesting branching outcome paths three moves ahead. Think about how this might apply to the enterprise world and business intelligence, where you are looking at a strategic map of international or market-segment expansion, with possible next moves and resulting scenarios modeled out and visualized for you in real time.
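To make the branching-moves idea concrete, here is a minimal sketch (in Python, using the open-source python-chess library purely as an example data source, not any particular VR engine) of how a visualization layer might enumerate the move tree a few plies deep before rendering it spatially:

```python
# Illustrative sketch: enumerate the branching move tree a VR layer might
# render spatially. The python-chess library stands in for the game logic;
# the actual 3D rendering is out of scope here.
import chess


def move_tree(board: chess.Board, depth: int = 3) -> dict:
    """Return a nested dict of legal moves up to `depth` plies from this position."""
    if depth == 0:
        return {}
    tree = {}
    for move in list(board.legal_moves):  # snapshot, since we mutate the board below
        board.push(move)                  # explore this branch
        tree[move.uci()] = move_tree(board, depth - 1)
        board.pop()                       # restore the position
    return tree


if __name__ == "__main__":
    tree = move_tree(chess.Board(), depth=3)
    print(f"{len(tree)} first moves, "
          f"{sum(len(replies) for replies in tree.values())} replies at ply two")
```

The same enumerate-then-render pattern would apply to the business-intelligence case, with strategic options standing in for chess moves.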
VR-enhanced memory castles are another area where data visualization can be very helpful, as the human brain is wired to better recall things when they are laid out spatially. In fact, that's how so-called super-memory competitors work: they create memory castles out of mnemonics, placing numbers, facts, figures and names within imaginary spaces and rooms. They imagine elaborate constructions or buildings stacked with these, allowing them to recall up to 1,000 pieces of data. VR is the perfect type of interface for creating and navigating these memory castles. A file cabinet of every memory or fact you need, arranged spatially, is probably more efficient than Evernote, a search tool or any of those sorts of technologies we use today.
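As a thought experiment (the structure and names below are hypothetical, not any existing product), a memory castle is essentially a spatial index: each fact is anchored to a position inside a named room, and recall becomes a walk through the space:

```python
# Thought-experiment sketch of a "memory castle" as a spatial index: each fact
# is anchored at a 3D position inside a named room, and recall is a walk
# through the rooms in order. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Anchor:
    fact: str
    position: tuple[float, float, float]  # where the fact "sits" in the room


@dataclass
class Room:
    name: str
    anchors: list[Anchor] = field(default_factory=list)

    def place(self, fact: str, position: tuple[float, float, float]) -> None:
        self.anchors.append(Anchor(fact, position))


@dataclass
class MemoryCastle:
    rooms: list[Room] = field(default_factory=list)

    def walk(self):
        """Yield (room, fact) pairs in spatial order, the recall path a VR UI would animate."""
        for room in self.rooms:
            for anchor in sorted(room.anchors, key=lambda a: a.position):
                yield room.name, anchor.fact


castle = MemoryCastle(rooms=[Room("foyer"), Room("library")])
castle.rooms[0].place("pi is roughly 3.14159", (0.0, 1.0, 2.0))
castle.rooms[1].place("Q3 revenue target: $12M", (1.5, 0.0, 0.5))
print(list(castle.walk()))
```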
Opportunities: advertising and marketing, virtual tourism and travel, real estate, education
We've already seen many compelling demos for VR tourism, as well as AR-based advertisements. I think we're also going to see the rise of AR place-tagging, and this may extend to applications like VR/AR art. Maybe we'll see new forms of digital street art and graffiti popping up, as imagined by cyberpunk authors like William Gibson and Charles Stross.
The social applications are numerous. Imagine Foursquare-like check-ins with social Easter eggs you leave for your friends or others to see. This might reinvent the whole practice of geocaching: where today's real-life games involve physical-object geocaching, you might now layer digital geocaching on top as well.
This type of place-tagging might yield innovations in tourism, real estate showings and education, when you can add a museum-exhibit-quality overlay or historical references to any location. I'd love to see timelining combined with places: imagine if you wanted to see what 5th and Market in San Francisco looked like in 1972, then in 1960, then in 1875, with a Google Earth-meets-Street View overlay plus historical timelines. That would be an interesting way to experience different places (or perhaps an interesting twist on ARG gaming, where multiple layers of reality and timelines exist for the same location).
Opportunities: retail, fashion, hyperlocal info
In terms of long-term upside and disruption, commerce and shopping will be big areas for VR and AR. Here we're going to see AR heads-up displays overlaid on top of existing real-world apps we already use habitually, whether it's bringing Amazon-like product info to everything you see around you, or Yelp, Zillow, OpenTable or NextDoor metadata for hyperlocal information.
What if I want to know more about the stylish chair in front of me: what does it cost, who's the maker? Or who lives in that sick house I'm walking past, what did they pay for it, what's the reported property tax value? Or how about an instant virtual walk-through tour? Any and all of these questions will be instantly answered on demand via real-time overlay data for the things you see around you in the real world, starting with apps for smartphones, but soon expanding to wearables and headsets/visors.
This also applies quite well to experiential purchasing. Anything that showrooming is good for would be well-suited to VR or AR shopping experiences. You probably wouldn't need to enter a Sharper Image or Brookstone retail location, as you could get most of the product demo experience in a VR setting for unique novelty products you've never seen before.
VR/AR visualization will provide powerful functionality for this space as well, enabling users to instantly picture potential purchases rendered onto their home environment or even their own bodies, allowing for rapid virtual fittings of more SKUs than ever (like previewing all possible accessories and outfits for your avatar in a video game).
Opportunities: social media, UGC, communications
We're going to see some really good lessons and techniques from the informational interfaces and widgets of the MMO gaming world applied to the real world and the people around us, so that we can see their profiles, stats, identity, context, affiliations, background, common connections and much more, all in a heads-up-display-style format.
That's something I always loved about MMOs: whenever you moused over a character, all their relevant stats would pop up so you could instantly identify friend or foe, somebody you want to meet, what you have in common, what you might want to trade. Now imagine applying that to the world around you.
The fun part is: what if the next version of About.me is you choosing what overlay data to show when other people see you, and how you allow them to connect with you?
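To illustrate what self-curated overlay data might look like under the hood, here is a hypothetical sketch in Python; the field names and audience tiers are made up for illustration:

```python
# Hypothetical sketch of a self-curated overlay profile: the owner decides
# which fields are visible at which audience tier, and a viewer's HUD only
# renders what their relationship entitles them to see. All names are made up.
from dataclasses import dataclass

VISIBILITY_RANK = {"public": 0, "friends": 1, "private": 2}


@dataclass
class OverlayField:
    key: str
    value: str
    audience: str = "public"  # minimum tier allowed to see this field


def render_hud(fields: list[OverlayField], viewer_clearance: str) -> dict:
    """Return only the fields whose audience tier is within the viewer's clearance."""
    rank = VISIBILITY_RANK[viewer_clearance]
    return {f.key: f.value for f in fields if VISIBILITY_RANK[f.audience] <= rank}


profile = [
    OverlayField("name", "Alex"),
    OverlayField("common_interests", "VR, Go, hiking", audience="friends"),
    OverlayField("contact", "ping me in-world only", audience="private"),
]
print(render_hud(profile, viewer_clearance="friends"))  # hides the private field
```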
This leads to some exciting questions: What would be the VR/AR versions of emoji? What would stickers be? Would they be animations of 3D objects in overlay? Animated GIFs would probably have some sort of analog in this world, perhaps more elaborate reality filters or visualizations. What would a next-gen selfie in VR/AR be? How would you tune your reality, or what you see? What would the equivalent of an Instagram experience be for capturing moments and sharing them? What would messaging, à la SnapChat, be in a VR context? (For a bleaker example of how tuned reality could play out, watch the latest season of Black Mirror on Netflix.)
While existing platforms like Facebook, Apple and YouTube will try to add VR into their existing systems, I believe whole new networks could be created from the ground up, natively for these mediums. I'm pretty excited about all sorts of other unexpected wonders and hits that will emerge on the UGC and social front.
One thing I'd love to see: Do you remember that scene from The Matrix when Neo says, "I need guns," and then suddenly gun racks start flying out? Think of a VR version of Scribblenauts, that wonderful, whimsical Nintendo DS game from back in the day. The magic of Scribblenauts was typing in anything you could think of, out of thousands of possible objects, and having a cute, interactive 2D version of it drop out of the sky right in front of you, because the game already had that object in its libraries.
We have that same potential in VR, because there are thousands of open-source 3D objects already out there. Imagine ingesting these into a VR sandbox engine that can auto-normalize 3D models, with a physics engine that can process basic nouns, verbs and adjectives, allowing a user to intuitively type or say what they want to appear and happen in the world: "I wanna ride a whale!" triggers a scaled 3D whale to materialize beneath you and begin to swim around. "Actually, let's make it a robot whale!" (The 3D whale model gets re-skinned as MechaWhale.) "And it shoots lasers from its eyes!" And so on. If any entrepreneurs are working on something like this, I'd love to talk to you about it!
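For what it's worth, here is a toy sketch of how such a sandbox pipeline might work: a naive parser maps a free-form request to a model lookup, an optional re-skin and simple behaviors. The catalog, skins and behavior names below are all invented; a real system would use proper natural-language understanding and an actual 3D asset index.

```python
# Toy sketch of a Scribblenauts-style VR sandbox pipeline. Everything here
# (catalog paths, skins, behaviors) is hypothetical and for illustration only.
MODEL_CATALOG = {"whale": "models/whale.glb", "dragon": "models/dragon.glb"}
SKINS = {"robot": "textures/chrome_mecha.png"}
BEHAVIORS = {"ride": "attach_player_mount", "shoot lasers": "add_laser_emitter"}


def parse_command(text: str) -> dict:
    """Very naive keyword matching standing in for a real NLU step."""
    text = text.lower()
    return {
        "noun": next((n for n in MODEL_CATALOG if n in text), None),
        "skin": next((s for s in SKINS if s in text), None),
        "verbs": [v for v in BEHAVIORS if v in text],
    }


def spawn(command: str) -> dict:
    """Resolve a parsed command into a scene-graph node description."""
    parsed = parse_command(command)
    if parsed["noun"] is None:
        raise ValueError(f"No known model matches: {command!r}")
    return {
        "model": MODEL_CATALOG[parsed["noun"]],
        "texture": SKINS.get(parsed["skin"]),
        "behaviors": [BEHAVIORS[v] for v in parsed["verbs"]],
    }


print(spawn("I wanna ride a whale"))
print(spawn("Actually, make it a robot whale that can shoot lasers from its eyes"))
```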
Opportunities: enterprise, AI, telepresence, automation
Such technology works perfectly well in the enterprise context (and, in fact, we may see significant initial revenues much more quickly there than from the still-fragmented and nascent consumer market). Let's say you're a salesman, or you're negotiating a corp-dev deal, with an assistive AI and AR overlay feeding you helpful data, or even the next line to say, in real time. It's augmenting how you work as a professional.
If you haven't seen it yet, Creative Control, an Amazon original movie, is quite interesting. It features something called Augmenta, a fictional pair of augmented reality glasses. The killer application on Augmenta is an avatar-maker. Imagine that from social media, Instagram photos, videos and so forth, you could stitch together an interactive avatar of anybody (yourself or anyone else). Now bring to mind a love interest or a crush, and picture how you'd interact with an avatar you've crafted of that person, simulating whatever interaction you want to have with him or her, or potentially anybody else out there.
This gets to some pretty interesting and twisted scenarios, where you combine a MeBot artificial intelligence with your digital avatar representation. You could author your own digital ghost and, in essence, become immortal. If you get hit by a truck tomorrow, your grandkids could still talk and interact with you, because if it had your whole corpus of past email messages and social media output, that AI could probably simulate you. The MeBot could probably get to about 85-90 percent accuracy, where somebody could interact with it and it would speak in the same tone and language that you would use.
This does get to the notion of mixed reality, and then what I think of as remixed reality. If you could auto-enhance and autotune everyone and everything around you, why would you not want to do that all the time? In terms of physical appearance, if you could make every six into a perfect nine, do you think you'd be a kinder person? Would you treat people better?
If you take that to an extreme, why would you ever not want to see the world filtered? Nobody takes an unfiltered Instagram photo anymore. This will create a whole new class of social problems, including VR addiction, which in turn would create an opportunity for VR addiction clinics.
Opportunities: healthcare, therapy
Another non-obvious upside of VR will be VR-enabled digital therapeutics. Take the example of an Epley Omniax chair. There are only a few hundred of them around the world; they're hugely expensive, and they're used to treat vertigo and similar conditions. Much of that could also be achieved as a VR app with a wearable to help people get over vertigo. There's a startup called Vivid Vision treating lazy eye with VR headsets. VR therapeutics is an area I'm looking at, and it could be an alternative to more expensive healthcare devices out there.
Consider Adam Gazzaley, a neuroscientist at UCSF, and what he's done with Akili. He's collecting evidence to show that these are actually high-efficacy treatments and can be prescribed. The trigger point is the moment you have collected enough clinical data, submitted it to the FDA and gotten approval, so the therapy can be reimbursed under an existing payer code. Then it's game on. At that point you've unlocked billions of dollars. That's the day doctors can recommend a game, or a digital therapy, in addition to a drug or other procedure. That's when this market blows wide open.
A friend of mine rebuilt in VR the scene from The Matrix where Morpheus teaches Neo to jump over buildings. I think he fell 20 stories something like 122 times, and he said he inadvertently cured his fear of heights in the process. What other phobias could we cure? Public speaking? Spiders? Airplane crashes?
Opportunities: dating, human-computer interfaces, human augmentation, cybernetics and post-human evolution?
If you remember the movie Her, it illustrated the notion that virtual and augmented reality doesn't have to happen through just the eyes. It can happen through other senses, like the ears, so I think of audio-driven interfaces as hearables. With hearables, you'd have other forms of these guardian angels: a 24/7 therapist, a coach or a Cyrano-style assistant telling you what to say or do next. If it happened in the voice of Scarlett Johansson as an upsell, even better.
This could also lead to another social problem where, just like in the movie Her, people get addicted to their operating system. There's the notion that you'll end up tuning the world exactly to the things you want to see, in exactly the way you want to see or hear them. I jokingly call the result not eHarmony but meHarmony. If this chatbot or AI engine knows exactly what you want to hear all the time, and it tells you everything you want to hear, why would you listen to anybody else?
(Speaking of which, fake news or personally tuned news in the post-factual social news era, anybody? Perhaps our future world leaders will be highly charismatic celebs armed with AI and chatbots that can deliver whatever each individual wants to hear, via messaging, social media and VR/AR, at scale and in real time.)
What I want to leave you with is this: if we are virtualizing and augmenting reality, it's worth asking what reality is to us. How do we experience it, and then how do we alter that experience? Fundamentally, our experience of reality is through our senses. If that's the case, what other senses can we hack and play with? We're going to have see-ables; that's the state of the market today. But we'll soon have hearables, feelables, ingestibles, smellables, implantables and much, much more.
The point is, all of these are senses that can soon be measured, used as input-output devices, and then further stimulated, simulated, transposed and used to remix and augment our reality. Those are some of the things I'm seeing on the frontier. And I think this will create billions and billions of dollars' worth of market capitalization in new companies and technologies out there, not to mention ushering in a whole new era of post- and trans-human existence, or our next technological existential crossroads!