This isn’t the Metaverse, at least not yet. Some say the vision of fully augmented and digital worlds has been buried under false promises and marketing jargon. But the reality is that we’re still figuring this all out.
While gaming sets the bar for the best of these augmented and digital interactions, the fidelity of 3D models and scanning technologies continues to improve at an accelerating pace. It’s this middleware, the Scaniverse, that will form the foundation for extended and virtual realities.
Let’s connect the dots on scanning technologies, their application, and where they may lead us next.
Capturing the Real World
Our tools and techniques for capturing the real world continue to improve. Apple’s introduction of LIDAR in the iPhone 12 Pro in 2020 brought depth scanning to the phone’s camera, enabling better photos and true 3D capture.
This technology, combined with an ever-expanding toolset, has created a strong foundation of applications and algorithms that extend our reality.
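To make the value of depth scanning concrete, here is a minimal sketch of the standard pinhole-camera back-projection that turns a depth map into a 3D point cloud. This is the common math underlying tools like these, not code from any particular app, and the tiny 4x4 “sensor” and its intrinsics are made-up toy values.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into a 3D point cloud
    using the pinhole camera model. fx, fy are focal lengths in
    pixels; (cx, cy) is the principal point."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # horizontal offset, scaled by depth
    y = (v - cy) * z / fy          # vertical offset, scaled by depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat wall 2 m away, seen by a 4x4 depth sensor
depth = np.full((4, 4), 2.0)
points = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(points.shape)  # (16, 3)
```

Every pixel becomes a point in space; stitch enough of these clouds together from different viewpoints and you have the raw material for a 3D scan of a room, a product, or a person.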
- Polycam can be used to capture an entire home in 3D in minutes.
- RealityScan is a powerful solution to create models for use with Unreal Engine.
- Matterport is an enterprise-level tool to create 3D digital twins of properties.
- Move uses artificial intelligence to capture motion or animation in incredibly high fidelity.
- ReSpo.Vision can detect and layer tracking data on live sporting events.
Most of this information is used to create representations of our world or some version of mixed/extended reality (XR) – particularly in gaming.
Niantic, the developer of Pokémon GO, introduced LIDAR into the game in 2020, blending reality so that in-game characters were accurately represented in XR. This scanning activity made every Pokémon Trainer a source to generate accurate, dynamic 3D maps of real-world objects and their relative locations.
Using these scanning technologies and tools ensures we’ll begin with dimensional data. But what if we only have static images or video and want to turn that into a 3D scene or model?
NVIDIA Instant NeRF can do just that, processing images and video into a realistically rendered scene using a new technology called neural radiance fields (NeRF).
In the fly-through below, the model demonstrates near-instant training of neural graphics that are then rendered as a 3D environment in real time.
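Under the hood, NeRF-style methods share one core step: volume rendering a ray through a learned density and color field. The sketch below is a toy NumPy version of that compositing step, not NVIDIA’s Instant NeRF implementation; the sample densities and colors are made-up stand-ins for what a trained network would predict.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """NeRF-style volume rendering along one ray: alpha-composite
    per-sample colors, weighting each sample by its opacity and by
    the transmittance (how much light survives to reach it)."""
    alphas = 1.0 - np.exp(-densities * deltas)   # opacity of each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                     # contribution per sample
    return (weights[:, None] * colors).sum(axis=0)  # final RGB

# Two samples along a ray: a faint red patch in front of a dense blue one
densities = np.array([0.1, 10.0])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
deltas = np.array([1.0, 1.0])    # spacing between samples
rgb = render_ray(densities, colors, deltas)
```

Training a NeRF amounts to adjusting the density and color field until rays rendered this way reproduce the input photos; the rendered scene then holds up from viewpoints the camera never visited.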
Avatar reconstruction represents a fundamental step in representing ourselves in digital worlds. A photo-realistic avatar of ourselves may not make sense in Roblox, but there may be many other instances where an accurate model is useful.
Vid2Avatar is a method to reconstruct detailed three-dimensional avatars from monocular, or single-camera, videos. The results are as amazing as the pace of research.
Their method outperforms existing approaches, like ICON and SelfRecon, by decoupling people from their backgrounds through an algorithm that decomposes the scene.
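To make the decomposition idea concrete, here is a toy sketch of the compositing intuition: if one model explains the person (foreground) and another the background, each pixel is the foreground composited over the background by a learned opacity, and thresholding that opacity recovers the person’s silhouette. This is an illustrative simplification, not Vid2Avatar’s actual formulation, and the 2x2 frame is invented data.

```python
import numpy as np

def composite(fg_alpha, fg_rgb, bg_rgb):
    """Composite a foreground (person) layer over a background by a
    per-pixel opacity, then threshold that opacity to recover the
    person's binary silhouette mask."""
    a = fg_alpha[..., None]
    image = a * fg_rgb + (1.0 - a) * bg_rgb
    mask = fg_alpha > 0.5          # which pixels belong to the person
    return image, mask

# 2x2 toy frame: left column is the person (mostly opaque, red),
# right column is background (blue)
fg_alpha = np.array([[1.0, 0.0],
                     [0.9, 0.1]])
fg_rgb = np.ones((2, 2, 3)) * [1.0, 0.0, 0.0]   # red person
bg_rgb = np.ones((2, 2, 3)) * [0.0, 0.0, 1.0]   # blue background
image, mask = composite(fg_alpha, fg_rgb, bg_rgb)
```

The key design choice is that the segmentation falls out of the reconstruction itself rather than relying on a separate, pre-trained matting model.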
Imagine taking any online video and creating a realistic avatar from it. Now imagine what may be possible with high-quality, stereoscopic video capture.
Avatars in the Wild
NBA Commissioner Adam Silver introduced an exciting take on this technology by inserting Ahmad Rashad into a game at the 2023 NBA All-Star Tech Summit using a version of Polycam – while clearly ignoring some of the current technical limitations.
It’s a strong example of how all of these dots may connect. In this case, an NBA arena could be outfitted with technology like ReSpo.Vision to capture game action, fans could create a Metaverse-friendly avatar, and they could immediately see themselves inserted into the game.
The idea of placing people into games isn’t new; it’s being rediscovered through new technology. MLB The Show ’23 is bringing the ability to scan your face and add it to an in-game character using one simple photo.
Connecting the Dots
Let’s connect the dots using the 2023 NBA All-Star Slam Dunk contest and sponsor AT&T. Imagine an experience where AT&T wants to demonstrate its technology and the brand’s promise of “keeping you connected.”
NBA fans visit an AT&T activation to scan their face or body, which is rendered for use in an on-site version of an NBA Jam-inspired dunk contest on a massive arcade cabinet.
Each fan can play as their avatar on-site, or share it with family and friends to play in an online version.
As part of the activation, you can win a moment where your avatar is featured during the broadcast simply by opting into a marketing or sales funnel. You pick one of your favorite all-time dunks to “compete” in the Slam Dunk contest.
The NBA and AT&T then render a version of your avatar taking off from the free-throw line for a Michael Jordan-esque dunk, for the entire world to see.
Perhaps everyone who stopped by the activation receives a video of their avatar mimicking Mac McClung’s winning dunk.
Wouldn’t that be a moment?
This is a practical application of today’s technology. So what’s stopping you from making it a reality?