It’s no surprise that Facebook is investing heavily in VR and AR technologies; over the years the company has made a number of creative innovations. Recently, however, it has taken a new approach to one particular aspect of the VR experience.
Conveying Emotions Through Facebook VR Avatars
Facebook has worked on virtual avatars for years now. At F8 2016, Facebook Chief Technology Officer Mike Schroepfer introduced new avatars for Facebook Spaces, replacing the floating blue head in use at the time with an updated model featuring new facial features and lip movement.
At F8 2018, he debuted Facebook’s work on more lifelike avatars, developed by FRL Pittsburgh. In a brief demo, audiences saw two realistic digital humans animated in real time by members of the team.
The Facebook Reality Labs team has made significant progress in the two years since Schroepfer debuted their work on lifelike avatars. “We’ve completed two capture facilities, one for the face and one for the body,” says Yaser Sheikh, the Director of Research at Facebook Reality Labs in Pittsburgh.
“Each one is designed to reconstruct body structure and to measure body motion at an unprecedented level of detail. Reaching these milestones has enabled the team to take captured data and build an automated pipeline to create photorealistic Facebook VR avatars.”
With recent breakthroughs in machine learning, these ultra-realistic avatars can be animated in real time.
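To make the "animated in real time" idea concrete, here is a minimal sketch of a linear blendshape model, a standard building block for driving avatar faces: the posed mesh is the neutral mesh plus a weighted sum of expression offsets. The toy three-vertex mesh, the shape names, and the weights are invented for illustration; Facebook's actual system relies on learned models far beyond this sketch, which would supply the per-frame weights from headset sensor data.

```python
# Illustrative sketch only: a linear blendshape model. The mesh and the
# expression deltas below are made-up toy values, not Facebook's pipeline.

def animate(neutral, blendshapes, weights):
    """Return posed vertices: neutral + sum of weighted expression deltas."""
    posed = []
    for i, (x, y, z) in enumerate(neutral):
        dx = dy = dz = 0.0
        for (name, deltas), w in zip(blendshapes, weights):
            ddx, ddy, ddz = deltas[i]
            dx += w * ddx
            dy += w * ddy
            dz += w * ddz
        posed.append((x + dx, y + dy, z + dz))
    return posed

# Toy mesh: three vertices around a "mouth".
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, -0.5, 0.0)]
# Two expression blendshapes, each a per-vertex offset from neutral.
blendshapes = [
    ("smile",    [(0.0, 0.1, 0.0), (0.0, 0.1, 0.0), (0.0,  0.0, 0.0)]),
    ("jaw_open", [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, -0.3, 0.0)]),
]
# In a real system, a learned model would predict these weights each frame
# from headset sensor input; here they are hard-coded.
frame = animate(neutral, blendshapes, weights=[0.5, 1.0])
print(frame)
```

Because each frame is just a weighted sum, evaluating the pose is cheap enough to run at headset refresh rates; the hard, data-hungry part is learning to predict good weights from a user's face.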
Engage your (whole) body!
Facebook Reality Labs’ research manager, Ronald Mallet, showed off a video on Wednesday of what he called an early prototype of the next generation of hyper-realistic VR avatars.
In it, a man and a woman moved around a big room wearing VR headsets, while nearly identical, three-dimensional, virtual versions of them — down to their jeans and t-shirts — played soccer on a virtual field with a digital soccer ball. As they raised their hands and kicked their legs in real life, their VR avatars did the same with what appeared in the video to be only a slight lag.
Mallet pointed out that this kind of fully tracked, full-body avatar is still far off in the future. One challenge (among many) is that there is currently no way for people to generate these digital versions of themselves with off-the-shelf sensors.
Facebook will also need to determine how best to keep these avatars secure. Mallet said this may mean using facial or fingerprint recognition to connect a realistic avatar to a person. (Excerpt: CNN)
The end goal, however, is to achieve all of this through lightweight, mainstream consumer headsets. For now, FRL Pittsburgh uses its own prototype Head Mounted Capture systems (HMCs), equipped with cameras, accelerometers, gyroscopes, magnetometers, infrared lighting, and microphones, to capture the full range of human expression.
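Several of the sensors listed above are inertial, and a standard (if greatly simplified) way to combine them is a complementary filter: trust the gyroscope over short timescales, where it is smooth, and the gravity-derived accelerometer tilt over long timescales, where it does not drift. The sketch below uses synthetic readings purely for illustration and is not Facebook's tracking code.

```python
# Illustrative only: a complementary filter fusing gyroscope angular rate
# with accelerometer tilt into one orientation angle. Sample data is synthetic.

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro rate (deg/s) with accelerometer tilt (deg) over time."""
    angle = accel_angles[0]  # initialise from the accelerometer
    history = []
    for rate, acc in zip(gyro_rates, accel_angles):
        # Integrate the gyro for short-term accuracy, then pull gently
        # toward the (noisy but drift-free) accelerometer estimate.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        history.append(angle)
    return history

gyro = [10.0, 10.0, 10.0, 10.0]   # deg/s, synthetic
accel = [0.0, 1.2, 1.9, 3.1]      # tilt from gravity, deg, synthetic
angles = complementary_filter(gyro, accel, dt=0.1)
print([round(a, 2) for a in angles])
```

The blend factor `alpha` controls how much the gyro is trusted relative to the accelerometer; values near 1 smooth out accelerometer noise while still correcting long-term drift.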
Interact in the Virtual World
Using a small group of participants, the lab captures 1GB of data per second to build a database of physical traits. In the future, the hope is that consumers will be able to create their own avatars without a capture studio, and from far less data.
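To put the 1GB-per-second figure in perspective, a quick back-of-the-envelope calculation (assuming the rate is sustained for the whole session) shows how quickly the raw data adds up:

```python
# Back-of-the-envelope: raw data produced at the lab's stated capture rate
# of 1 GB per second, assuming it is sustained.
rate_gb_per_s = 1

for label, seconds in [("1 minute", 60), ("10 minutes", 600), ("1 hour", 3600)]:
    total_gb = rate_gb_per_s * seconds
    print(f"{label}: {total_gb} GB ({total_gb / 1000:.1f} TB)")
```

Even a ten-minute capture produces hundreds of gigabytes, which is why a consumer-friendly version of the pipeline would have to work from vastly less data.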
The technology has promising possibilities, ranging from more realistic virtual interactions to security and authentication, but it is still a long way off. We don’t yet know when, or if, a consumer-facing product will come out of this Facebook project, but it marks a major milestone for modeling human behavior in virtual environments.