Advances in creating 3D models from 2D photographs are getting downright amazing. This month a team of computer vision researchers from UC Berkeley, UC San Diego and Google Research showed off NeRF (Neural Radiance Fields), their technique for "view synthesis" on a variety of objects captured as 2D images, and the level of detail extracted is astonishing:
Their research paper is here, and they've posted the code to GitHub.
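For the curious, the core idea behind NeRF can be sketched in a few lines. A network maps a 5D input (a 3D position plus a 2D viewing direction) to an emitted color and a volume density, and a pixel's color is recovered by compositing samples along a camera ray. The toy "network" below is random weights, purely a hypothetical stand-in for illustration; it is not the authors' code or architecture.

```python
import numpy as np

# Illustrative sketch of the NeRF idea, NOT the authors' implementation:
# a function maps (x, y, z, theta, phi) -> (r, g, b, density), and a pixel
# is rendered by alpha-compositing samples taken along a ray.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 16))   # hypothetical hidden-layer weights
W2 = rng.normal(size=(16, 4))   # outputs: 3 color channels + 1 density

def radiance_field(xyz, view_dir):
    """Map (3D position, 2D view direction) -> (rgb in [0,1], density >= 0)."""
    inp = np.concatenate([xyz, view_dir])      # the 5D input
    h = np.tanh(inp @ W1)                      # one tiny hidden layer
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))       # sigmoid keeps color in [0, 1]
    sigma = np.log1p(np.exp(out[3]))           # softplus keeps density >= 0
    return rgb, sigma

def render_ray(origin, direction, view_dir, n_samples=32, near=0.0, far=4.0):
    """Volume-render one ray: accumulate color weighted by transmittance."""
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    color = np.zeros(3)
    transmittance = 1.0                        # fraction of light surviving so far
    for t in ts:
        rgb, sigma = radiance_field(origin + t * direction, view_dir)
        alpha = 1.0 - np.exp(-sigma * delta)   # opacity of this ray segment
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0]))
print(pixel)
```

The real system trains the network so that rays rendered this way reproduce the input photographs, which is what lets it synthesize novel views.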
We're Getting Closer to Creating 3D Models from Single 2D Photographs