As industrial designer Eric Strebel showed us in "How to 3D Scan an Object, Without a 3D Scanner," if you take a crapload of 2D photographs, smart photogrammetry software can stitch them into a workable 3D model.
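For the technically curious, the core of that photogrammetry trick is matching features across overlapping photos, recovering the camera motion between them, and triangulating matched points into 3D. Here's a minimal two-view sketch using OpenCV; the file names and focal-length guess are placeholders, and real tools chain many views together, so treat this as an illustration of the idea rather than what Strebel's software actually runs:

```python
# Minimal sketch: sparse 3D points from two overlapping photos.
# Assumes OpenCV + numpy; "left.jpg"/"right.jpg" are placeholder inputs.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features between the two views.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Rough pinhole intrinsics: guessed focal length, principal point at center.
h, w = img1.shape
K = np.array([[0.9 * w, 0, w / 2], [0, 0.9 * w, h / 2], [0, 0, 1]])

# Estimate the relative camera pose, then triangulate matches into 3D.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # sparse point cloud, one row per match
print(cloud.shape)
```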
But what if you've only got a single 2D shot? Since this is the year 2020, you'd think that by now we'd have Esper machines. That's what Deckard used in the original Blade Runner, set in 2019, to zoom around in a 2D photo to discover data hidden in the third dimension:
We might not have Esper machines yet, but we're getting closer. Last fall, computer vision researchers Simon Niklaus, Long Mai, Jimei Yang and Feng Liu, of Portland State University and Adobe Research, released a paper called "3D Ken Burns Effect from a Single Image," detailing a neural network they'd built to pull the trick off. An unaffiliated experimental coder named Jonathan Fly subsequently applied older depth-mapping techniques to the paper's results, yielding this:
"Some images do well even with the older stuff, others are riddled with artifacts," Fly writes. "I did use dramatic angles to make the failures stand out, in part because I do love a good artifact."
via BoingBoing