DeGraRec: 3D Deformable Object Reconstruction Using Graph Neural Networks and Depth Estimation (2024)

Abstract

Obtaining a 3D representation of objects from scenes composed of multiple images or frames is a task that often requires advanced hardware and expertise in 3D rendering. With the rise of machine learning applications, the task has become easier to solve for static objects, but much progress remains to be made for deformable or moving objects in scenes.

Our research focuses on creating a framework for extracting 3D deformable objects from 2D scenes. We investigate the use of multiple graph convolutional operators and depth estimators to extract the object, while also using predefined segmentation masks for the objects in the images. The experiments focus on a dataset from 2017 that meets all the requirements, providing ground truths for segmentation, depth estimation, and the resulting object. The best results were obtained using a smaller version of MiDaS and the DeGraRec_SAGE variant, on random selections of images from the dataset.
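The abstract does not spell out how the depth estimate and the segmentation mask are combined, but a standard way to turn a masked monocular depth map into object points is pinhole back-projection. The sketch below is an assumption about this step, not the paper's actual implementation; the intrinsics `fx, fy, cx, cy` and the toy depth map are hypothetical values chosen for illustration.

```python
import numpy as np

def backproject_masked_depth(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into a camera-space point cloud.

    depth: (H, W) depth map, e.g. from a monocular estimator such as MiDaS
    mask:  (H, W) boolean segmentation mask selecting the object
    fx, fy, cx, cy: pinhole camera intrinsics (hypothetical here)
    Returns an (N, 3) array of 3D points, one per masked pixel.
    """
    v, u = np.nonzero(mask)            # pixel coordinates inside the mask
    z = depth[v, u]                    # depth at each masked pixel
    x = (u - cx) * z / fx              # standard pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Toy example: a flat 4x4 depth map with a 2x2 object mask.
depth = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
cloud = backproject_masked_depth(depth, mask, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
print(cloud.shape)  # → (4, 3)
```

A point cloud produced this way could then serve as the graph over which the graph convolutional operators (such as the SAGE variant named in the abstract) operate.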

The results were evaluated only at the level of a point-cloud reconstruction of the object. We observed that the models learned important patterns and were able to at least partially recover the object's shape, while problems arose at the dataset level from missing information in some segmentation masks. We consider better determination of the object's edges an important step for further research.
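Evaluating results "at a point-cloud level" typically means comparing the reconstructed cloud against a ground-truth cloud. The abstract does not name the metric used; the Chamfer distance below is one common choice, shown here purely as a hedged illustration of such a comparison.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    For each point in one set, take the squared distance to its nearest
    neighbour in the other set; sum the averages of both directions.
    Note: the O(N*M) pairwise matrix is fine for small clouds only.
    """
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # reconstruction (toy)
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0]])  # ground truth (toy)
print(chamfer_distance(a, b))  # → 1.0
```

A lower value indicates the two clouds are closer; missing mask regions of the kind the abstract mentions would show up as points absent from the reconstruction, inflating the ground-truth-to-reconstruction term.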

Citation

@inproceedings{Loghin2024DeGraRec3D,
  author    = {Mihai-Adrian Loghin and A. Andreica},
  booktitle = {Computer Graphics International Conference},
  title     = {DeGraRec: 3D Deformable Object Reconstruction Using Graph Neural Networks and Depth Estimation},
  year      = {2024}
}
