Meta releases its 3D reconstruction tool, Implicitron
Meta has announced the release of Implicitron, its latest extension of PyTorch3D.
The new 3D computer vision research tool uses novel-view synthesis methods, based on rendering implicit representations, to enable rapid prototyping of 3D reconstruction.
Meta’s next step in AR and VR technologies
Implicitron can take image data and turn it into accurate 3D reconstructions, without needing huge amounts of data to do so.
It can learn from a fraction of the data that is usually required, building a 3D representation of an object or scene “using a sparse set of combined images”.
“Unlike traditional 3D representations such as meshes or point clouds, this newer approach represents objects as a continuous function, which allows for more accurate reconstruction of shapes with complex geometries as well as higher color reconstruction accuracy”, Meta AI explained in a statement.
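To illustrate the idea behind that description, the following is a minimal, self-contained sketch in plain PyTorch, not Implicitron’s actual API: a small neural network stands in for the continuous scene function, mapping any 3D coordinate to a color and a density, and a toy emission-absorption renderer composites it along a camera ray to produce a pixel of a novel view. All class and function names here are illustrative.

```python
# Minimal sketch of an implicit 3D representation (not Implicitron's API):
# a network maps continuous 3D coordinates to color and density, and a
# simple volume renderer composites those values along a camera ray.
import torch
import torch.nn as nn


class ImplicitField(nn.Module):
    """Toy continuous scene function: (x, y, z) -> (RGB color, density)."""

    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 4),  # 3 color channels + 1 density
        )

    def forward(self, points: torch.Tensor):
        out = self.mlp(points)
        color = torch.sigmoid(out[..., :3])    # RGB in [0, 1]
        density = torch.relu(out[..., 3:])     # non-negative opacity
        return color, density


def render_ray(field, origin, direction, n_samples=64, near=0.1, far=4.0):
    """Composite color along one ray with emission-absorption rendering."""
    t = torch.linspace(near, far, n_samples)        # depths along the ray
    points = origin + t[:, None] * direction        # (n_samples, 3) sample points
    color, density = field(points)
    delta = (far - near) / n_samples                # uniform step size
    alpha = 1.0 - torch.exp(-density.squeeze(-1) * delta)   # per-sample opacity
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
    weights = trans * alpha                          # compositing weights
    return (weights[:, None] * color).sum(dim=0)     # final RGB for this pixel


# Example: render one pixel's ray through an (untrained) implicit field.
field = ImplicitField()
pixel_rgb = render_ray(field, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
print(pixel_rgb)  # tensor of shape (3,)
```

Because the scene is a function rather than a fixed mesh or point cloud, it can be queried at any resolution, which is what allows the smooth geometry and color reconstruction Meta describes; in practice such a field is trained by comparing rendered rays against the sparse set of input photographs.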
The tool has been designed to support Meta’s ongoing research into real-world applications of AR and VR, for instance letting customers try on clothes in virtual shopping environments or relive their filmed memories more immersively.
“Implicitron aims to serve as a cornerstone for conducting research in the field of neural implicit representation and rendering. This lowers the barrier to entry into this field and enables vast new opportunities for exploration”, Meta AI has stated.
However, Implicitron is still in an early research phase, and multiple variants of the tool are still being evaluated.