In computer graphics, point clouds from laser scanning devices are difficult to render into photo-realistic images due to the lack of information they carry about color, normals, lighting, and connectivity between points. Rendering a point cloud after surface mesh reconstruction generally results in poor image quality with many noticeable artifacts. In this paper, we propose a conditional generative adversarial network that directly renders a point cloud given the azimuth and elevation angles of the camera viewpoint. The proposed method, called pc2pix, renders point clouds into images with higher class similarity to the ground truth than images produced from surface reconstruction. pc2pix is also significantly faster, more robust to noise, and able to operate on point clouds with fewer points.
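As a rough illustration of the interface described above, the sketch below shows a conditional generator that maps a point-cloud latent code plus the camera viewpoint (azimuth, elevation) to a rendered image. This is a minimal, hypothetical sketch, not the authors' implementation: the layer sizes, the `build_generator` name, and the assumption that the point cloud is first encoded into a fixed-length code are all assumptions for illustration.

```python
# Minimal sketch (not the pc2pix implementation) of a viewpoint-conditioned
# image generator. The point cloud is assumed to be pre-encoded into a
# fixed-length latent code; all layer sizes are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_generator(latent_dim=128, image_size=128):
    pc_code = layers.Input(shape=(latent_dim,), name="point_cloud_code")
    view = layers.Input(shape=(2,), name="azimuth_elevation")

    # Condition the generator on both the shape code and the camera viewpoint.
    x = layers.Concatenate()([pc_code, view])
    x = layers.Dense(8 * 8 * 256, activation="relu")(x)
    x = layers.Reshape((8, 8, 256))(x)

    # Upsample 8x8 -> 128x128 with transposed convolutions.
    for filters in (128, 64, 32, 16):
        x = layers.Conv2DTranspose(filters, 5, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)

    image = layers.Conv2D(3, 5, padding="same", activation="tanh")(x)
    return Model([pc_code, view], image, name="generator_sketch")
```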
The code is available at:
Keywords: point clouds, rendering, GAN, ACGAN, conditional GAN, point cloud to image, pc2pix, ShapeNet, CVPR 2019