Jul 28, 2017 | By Julia
A team of scientists from Purdue University in Indiana has developed a novel process which uses artificial intelligence (AI) to transform 2D images into 3D models. The technology, called SurfNet, could have applications in the advancement of self-driving vehicles, as well as virtual and augmented reality.
In simplest terms, the deep learning process transforms the 2D images it captures (through a standard movie camera, for instance) into 3D shapes and environments. This is achieved by "teaching" the SurfNet system with pairs of 2D images and corresponding 3D models, enabling it to predict the 3D versions of other 2D images it encounters.
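The pair-supervision idea can be illustrated with a toy example. This is not the actual SurfNet architecture (which is a deep network); it is a minimal stand-in that learns a mapping from 2D inputs to 3D surface coordinates from (image, model) pairs, then predicts the 3D version of an unseen image. All sizes and data here are made up for illustration.

```python
import numpy as np

# Toy illustration of training on paired 2D images and 3D models.
# A simple least-squares fit stands in for the deep network.
rng = np.random.default_rng(0)

n_pairs, n_pixels, n_points = 200, 64, 30   # hypothetical sizes

# Synthetic "ground truth" relation between images and shapes
true_map = rng.normal(size=(n_pixels, n_points * 3))

images = rng.normal(size=(n_pairs, n_pixels))   # flattened 2D inputs
models = images @ true_map                      # paired 3D targets (x, y, z per point)

# "Teach" the system with the pairs
learned_map, *_ = np.linalg.lstsq(images, models, rcond=None)

# Predict the 3D shape of a 2D image the system has never seen
new_image = rng.normal(size=(1, n_pixels))
predicted_3d = (new_image @ learned_map).reshape(n_points, 3)
print(predicted_3d.shape)  # one (x, y, z) coordinate per surface point
```

The real system replaces the linear fit with a convolutional network trained on hundreds of thousands of such pairs, but the supervision pattern is the same.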
Karthik Ramani, the Donald W. Feddersen Professor of Mechanical Engineering at Purdue, explained: "If you show it hundreds of thousands of shapes of something such as a car, if you then show it a 2D image of a car, it can reconstruct that model in 3D. It can even take two 2D images and create a 3D shape between the two, which we call 'hallucination.'"
As he went on to explain, the technology could serve a range of purposes in the future, including helping autonomous vehicles better read and understand their environments, improving 3D search on the web, and automatically creating high-quality 3D content for virtual and augmented reality.
Think about it: how cool would it be to show the system a photograph of a place and then be transported into a 3D version of it simply by putting on your AR/VR headset? "Pretty soon we will be at a stage where humans will not be able to differentiate between reality and virtual reality," commented Ramani.
The deep learning technique can be compared to how a camera or scanner operates using RGB colors, says the research team, only it uses XYZ coordinates to build a three-dimensional spatial understanding. This method reportedly offers greater accuracy than other 3D deep learning processes that rely on voxels (volumetric pixels).
"We use the surfaces instead since it fully defines the shape,” said Ramani. “It's kind of an interesting offshoot of this method. Because we are working in the 2D domain to reconstruct the 3D structure, instead of doing 1,000 data points like you would otherwise with other emerging methods, we can do 10,000 points. We are more efficient and compact.”
In practice, the technology could allow robots and self-driving vehicles to be fitted with standard 2D cameras and still "understand" their surroundings. At present, however, much work remains before SurfNet is a viable system.
"To move from the flatland to the 3D world we will need much more basic research," concluded Ramani. "We are pushing, but the mathematics and computational techniques of deep learning are still being invented and largely an unknown area in 3D."