PointAvatar: AI-Based Deformable Point-Based Head Avatars

PointAvatar: Converting Raw Captured Videos Into Deformable Point-Based Head Avatars

by Alan Jackson

PointAvatar is a fascinating method, used especially in the gaming industry and in the Metaverse.

Abstract

Building a 3D head avatar from raw sources (video or images) involves a number of demanding steps.

The hardest part is making the avatar realistic, animatable, and relightable. Current methods either rely on explicit 3D morphable meshes (3DMM) or exploit neural implicit representations.

However, both approaches fall short in practice. For instance, they struggle to represent thin hair strands and to handle changing lighting and shading.

This is because mesh-based approaches are limited by a fixed topology, while implicit representations can be hard to deform and are inefficient to render.

This is why 3D designers and developers in large-scale industries are turning to PointAvatar: not a piece of software or a service, but a robust method offering a deformable point-based representation that disentangles the source color into intrinsic albedo and normal-dependent shading.


In other words, it bridges the gap between explicit and implicit representations, combining high-quality geometry with ease of deformation and rendering efficiency.
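
As a rough illustration of this disentanglement, the color of each point can be thought of as an intrinsic albedo modulated by a shading term that depends only on the point normal. The snippet below is a minimal sketch of that idea; the Lambertian-style light model and all names here (shade_points, light_dir, and so on) are illustrative assumptions, not PointAvatar's actual shading formulation.

```python
import torch

def shade_points(albedo, normals, light_dir):
    """Toy disentangled color: color = albedo * shading(normal).

    albedo:    (N, 3) per-point intrinsic color
    normals:   (N, 3) per-point unit normals in the deformed space
    light_dir: (3,)   unit light direction for this illustrative Lambertian model
    """
    # The shading depends only on the point normal (here a simple Lambertian term);
    # the albedo stays fixed, which is what makes relighting possible.
    shading = torch.clamp(normals @ light_dir, min=0.0).unsqueeze(-1)  # (N, 1)
    return albedo * shading                                            # (N, 3)
```

In PointAvatar itself the shading term is predicted by a learned network from the point normals (see the method section below) rather than by a fixed analytic light model; the sketch only conveys the albedo-times-shading split.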

This will surely be helpful for 3D designers, supporting their projects with efficient rendering, easy animation, and well-defined surface geometry.

Method/Function

The method, deformable point-based head avatars from videos, generates animatable 3D avatars from monocular videos captured with a range of devices, including smartphone cameras, DSLRs, laptop webcams, and internet videos.

The method delivers state-of-the-art quality in challenging cases; for example, it can produce a head avatar that accurately matches a player in the Metaverse.

A monocular capture of a subject may contain a range of expressions and poses, and the method learns from all of them jointly.

Figure: overview of the PointAvatar method.

  • First, a point cloud representing the head geometry and appearance of the subject is learned in a canonical space.
  • Second, a deformation network transforms the point cloud into new poses using FLAME expression and pose parameters extracted from the RGB frames.
  • Finally, a shading network predicts a shading vector from the point normals in the deformed space. The three components are optimized jointly by comparing the rendering with the input frames (a minimal sketch follows this list).
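
To make the three-stage pipeline more concrete, below is a minimal PyTorch-style sketch of how the pieces could be wired together. Everything here is an assumption for illustration: the names (PointAvatarSketch, deform_net, shading_net, flame_dim), the network sizes, and the way normals are supplied are not the authors' implementation, and the differentiable point rendering used for the photometric comparison is omitted.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small helper network used by the deformation and shading branches."""
    def __init__(self, in_dim, out_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class PointAvatarSketch(nn.Module):
    """Illustrative three-part model: a canonical point cloud with per-point
    albedo, a deformation network driven by FLAME parameters, and a
    normal-dependent shading network."""
    def __init__(self, num_points=10_000, flame_dim=56):
        super().__init__()
        # 1) Learnable canonical point cloud and per-point albedo.
        self.canonical_points = nn.Parameter(0.1 * torch.randn(num_points, 3))
        self.albedo = nn.Parameter(torch.rand(num_points, 3))
        # 2) Deformation network: canonical point + FLAME expression/pose code -> offset.
        self.deform_net = MLP(3 + flame_dim, 3)
        # 3) Shading network: normal in the deformed space -> shading vector.
        self.shading_net = MLP(3, 3)

    def forward(self, flame_params, normals):
        # flame_params: (flame_dim,) expression/pose code for one frame.
        # normals: (num_points, 3) unit normals, assumed estimated in the deformed space.
        n = self.canonical_points.shape[0]
        cond = flame_params.expand(n, -1)
        offsets = self.deform_net(torch.cat([self.canonical_points, cond], dim=-1))
        deformed = self.canonical_points + offsets          # points in the posed space
        shading = torch.sigmoid(self.shading_net(normals))  # normal-dependent shading
        colors = self.albedo.clamp(0, 1) * shading          # albedo * shading
        return deformed, colors

# Toy usage: one frame's FLAME code and placeholder normals.
model = PointAvatarSketch()
flame = torch.zeros(56)
normals = nn.functional.normalize(torch.randn(10_000, 3), dim=-1)
points, colors = model(flame, normals)
```

In the full method, the deformed, colored points would be rendered with a differentiable point renderer and compared against the input frame, so that the canonical points, the deformation network, and the shading network all receive gradients and are optimized jointly, as described above.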

Previews

Below are collected results showing how the 3D avatar model looks: the learned 3D point clouds, animation, and relighting, respectively.

Learned 3D point clouds

Animation

Relighting

Future Scope

There is little doubt that AI will shape the future of this planet.

With ever-growing advances in gaming, 3D modeling, and robotics, techniques for building 3D avatars open up wide-ranging applications in communication and entertainment.

In the entertainment industry, PointAvatar could drive significant growth, while in communication it could enable new kinds of conversational experiences.


Final Thought

3D head avatars have been in use since the early days of 3D games. Over the years, the industry has grown substantially, and enterprises expect ever more 3D character design, 3D avatars, and beyond.

With this new method, deformable point-based head avatars, tasks that used to be challenging could become dramatically faster to complete.

Image & Video Source: zhengyuf.github.io


Alan Jackson

Alan is the content editor manager of The Next Tech. He loves to share his technology knowledge by writing blogs and articles. Besides this, he is fond of reading books, writing short stories, EDM music, and football.

