This paper explores the transition from latent-space diffusion models to pixel-space diffusion. We address the "flying pixel" artifact, a common byproduct of Variational Autoencoder (VAE) compression, by performing diffusion directly in the pixel domain. By leveraging semantics-prompted diffusion, our approach enables high-quality point cloud reconstruction from single-view images.

1. Introduction

Traditional monocular depth models such as Marigold often suffer from blurry edges and depth artifacts due to the lossy nature of VAEs.
We examine how high-level semantic cues guide the diffusion process to differentiate overlapping object boundaries.
We propose a framework that operates entirely within pixel space to maintain edge sharpness and spatial integrity.
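For illustration only, the sketch below shows one way such semantic prompting could be wired into a pixel-space depth denoiser: the noisy depth map is concatenated channel-wise with the RGB image and a per-pixel semantic feature map before a small convolutional network predicts the noise. The network, channel counts, and the linear noising step are placeholder assumptions, not the architecture actually used.

```python
import torch
import torch.nn as nn

class PixelSpaceDepthDenoiser(nn.Module):
    """Toy denoiser that operates directly on full-resolution depth maps.

    Conditioning follows the general semantics-prompted idea: the noisy
    depth map is concatenated with the RGB image and a per-pixel semantic
    feature map (e.g. upsampled features from a frozen vision encoder).
    Timestep conditioning is omitted to keep the sketch short.
    """

    def __init__(self, sem_channels: int = 16, width: int = 64):
        super().__init__()
        in_channels = 1 + 3 + sem_channels  # noisy depth + RGB + semantics
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, width, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(width, width, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(width, 1, 3, padding=1),  # predicted noise (1 channel)
        )

    def forward(self, noisy_depth, rgb, semantics):
        x = torch.cat([noisy_depth, rgb, semantics], dim=1)
        return self.net(x)

# One training-style step on random tensors, just to show the shapes involved.
model = PixelSpaceDepthDenoiser()
depth = torch.rand(2, 1, 64, 64)           # normalized ground-truth depth
rgb = torch.rand(2, 3, 64, 64)             # conditioning image
semantics = torch.rand(2, 16, 64, 64)      # stand-in for semantic features
noise = torch.randn_like(depth)
t = 0.5                                    # a single diffusion time in [0, 1]
noisy_depth = (1 - t) * depth + t * noise  # simple linear noising for the demo
loss = torch.mean((model(noisy_depth, rgb, semantics) - noise) ** 2)
print(loss.item())
```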
2. Methodology: Pixel-Space Diffusion

We analyze in detail how bypassing latent-space compression removes "flying pixels" at depth discontinuities.
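The toy example below, using made-up pinhole intrinsics and a synthetic two-plane scene, shows why this matters: a depth value interpolated between foreground and background at an edge (the kind of value a lossy VAE decoder tends to produce) back-projects to an isolated point floating between the two surfaces. It illustrates the artifact itself, not the paper's analysis procedure.

```python
import numpy as np

def unproject(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Back-project a depth map (H, W) into a point cloud (H, W, 3) with a
    pinhole camera model; the intrinsics here are arbitrary placeholders."""
    h, w = depth.shape
    cx = (w - 1) / 2 if cx is None else cx
    cy = (h - 1) / 2 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack([(u - cx) / fx * depth, (v - cy) / fy * depth, depth], axis=-1)

def flying_pixel_mask(depth, rel_gap=0.1):
    """Flag pixels whose depth is far from BOTH horizontal neighbours.

    At a clean discontinuity every pixel lies on either the foreground or
    the background surface. A pixel interpolated between the two is distant
    from both sides and becomes an isolated "flying" point in 3D.
    """
    left, right = depth[:, :-2], depth[:, 2:]
    center = depth[:, 1:-1]
    far_left = np.abs(center - left) > rel_gap * center
    far_right = np.abs(center - right) > rel_gap * center
    mask = np.zeros_like(depth, dtype=bool)
    mask[:, 1:-1] = far_left & far_right
    return mask

# A sharp foreground/background edge, then the same edge with one column of
# interpolated depths standing in for the blur a VAE decoder introduces.
sharp = np.full((4, 8), 5.0)
sharp[:, 4:] = 1.0
blurred = sharp.copy()
blurred[:, 4] = 3.0                        # in-between depth along the edge
print(flying_pixel_mask(sharp).sum())      # 0: no flying pixels
print(flying_pixel_mask(blurred).sum())    # 4: one flying pixel per row
cloud = unproject(blurred)                 # those pixels now hang in mid-air
```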
3. Quantitative and Qualitative Evaluation

Qualitative results provide visual evidence of reduced noise and sharper depth transitions compared with state-of-the-art latent-space models.
Quantitative comparisons are reported on the NYU Depth V2 and KITTI datasets.
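As a reference point for that comparison, the snippet below computes the error and accuracy measures most commonly reported on these benchmarks (absolute relative error, RMSE, and the delta < 1.25 accuracy). The exact protocol used in the paper, such as crops, depth caps, and scale alignment, is not reproduced here.

```python
import numpy as np

def depth_metrics(pred, gt, min_depth=1e-3, max_depth=80.0):
    """Absolute relative error, RMSE, and delta < 1.25 accuracy over the
    valid-depth pixels, as commonly reported on NYU Depth V2 and KITTI."""
    valid = (gt > min_depth) & (gt < max_depth)
    pred, gt = pred[valid], gt[valid]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    delta1 = np.mean(np.maximum(pred / gt, gt / pred) < 1.25)
    return {"AbsRel": abs_rel, "RMSE": rmse, "delta<1.25": delta1}

# Example on synthetic data; a real evaluation loops over the test split.
gt = np.random.uniform(0.5, 10.0, size=(480, 640))
pred = gt * np.random.uniform(0.95, 1.05, size=gt.shape)  # ~5% multiplicative noise
print(depth_metrics(pred, gt))
```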
4. Conclusion

Moving diffusion to pixel space represents a significant leap in the fidelity of generated depth maps. This has direct implications for high-resolution 3D reconstruction and augmented reality applications where depth precision is paramount.