There was a time when converting an old photograph into a digital image impressed people. These days we can do a bit more, like bringing vintage photos to life à la Harry Potter. And this week, chipmaker NVIDIA performed another magic trick.

Building on previous work, NVIDIA researchers showed how a small neural network trained on a few dozen images can render the pictured scene in full 3D. As a demo, the team transformed images of a model holding a Polaroid camera, an ode to Andy Warhol, into a 3D scene.

Two things stand out. First, it's very speedy. Earlier AI models took hours to train and minutes to render 3D scenes. NVIDIA's neural network takes no more than a few minutes to train and renders the scene in tens of milliseconds.

Second, the AI itself is diminutive in comparison to today's hulking language models. Large models like GPT-3 train on hundreds or thousands of graphics processing units (GPUs). NVIDIA's image rendering AI runs on a single GPU.

The work builds on neural radiance fields (NeRFs), a technique developed a couple of years ago by researchers at UC Berkeley, UC San Diego, and Google Research. In short, a NeRF takes a limited data set, say, 36 photographs of a subject captured from a variety of angles, and then predicts the color, intensity, and direction of light radiating from any point in the scene. That is, the neural net fills in the gaps between images with best guesses based on the training data. The result is a continuous 3D space stitched together from the original images.

NVIDIA's recent contribution, outlined in a paper, puts NeRFs on performance-enhancing drugs. According to the paper, the new method, dubbed Instant NeRF, exploits an approach known as multi-resolution hash grid encoding to simplify the algorithm's architecture and run it in parallel on a GPU. This upped performance by a few orders of magnitude (their algorithm runs up to 1,000 times faster, according to an NVIDIA blog post) without sacrificing quality.

NVIDIA imagines the technology could find its way into robots and self-driving cars, helping them better visualize and understand the world around them. It could also be used to make high-fidelity avatars people can import into virtual worlds or to replicate real-world scenes in the digital world, where designers can modify and build on them. The speed and size of neural networks matter in such cases, as huge algorithms requiring prodigious amounts of computing power can't be used by most people, nor are they practical for robots and cars without lightning-quick, dependable connections to the cloud.

The demo was part of NVIDIA's developer conference this week. Other highlights included a system for self-driving cars that aims to map 300,000 miles of roads down to centimeters by 2024 and an AI supercomputer the company says will be the fastest in the world upon release (a claim also made by Meta recently).

All this fits snugly into a larger narrative. The digital world is bleeding into the real world, and vice versa. And not just books, music, photos, documents, and payments, but people, places, and infrastructure. Given NVIDIA's chips excel at AI and graphics, the company is well-positioned to have a hand in it all. Indeed, not content with creating digital replicas of individual scenes, the company has said it's building a digital twin of the Earth too.

Granted, it's getting increasingly difficult to draw the line between marketing and sales pitches and serious developments. It's not uncommon to see mashups of all tech's top buzzwords (NFTs, the metaverse, AI, blockchain) in one headline. But while vision seems to be outpacing capability, there are plenty of hints we'll get there sooner or later. A mini AI that can turn a pile of Polaroids into a 3D scene is just one of them.
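For readers curious what "multi-resolution hash grid encoding" looks like in practice, here is a minimal sketch of the idea: a 3D point is looked up in several grids of increasing resolution, each grid's cell corners are hashed into a small feature table, and the corner features are interpolated and concatenated. This is an illustrative simplification, not NVIDIA's implementation: the function name, table size, level count, and the random (untrained) feature tables are assumptions for demonstration; in the real system the tables are trained jointly with a small neural network.

```python
import numpy as np

def hash_encode(xyz, n_levels=4, table_size=2**14, features=2,
                base_res=16, growth=2.0, seed=0):
    """Sketch of multi-resolution hash grid encoding for one 3D point.

    At each level, the point is scaled to that level's grid resolution,
    its cell's 8 corner indices are hashed into a small feature table,
    and the corner features are trilinearly interpolated. The per-level
    features are concatenated into one encoding vector.
    """
    rng = np.random.default_rng(seed)
    # One feature table per level (random here; learned during training).
    tables = [rng.normal(0, 1e-2, (table_size, features))
              for _ in range(n_levels)]
    # Large primes for a simple spatial hash (coordinate XOR-mix).
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)

    encodings = []
    for level, table in enumerate(tables):
        res = int(base_res * growth**level)
        scaled = xyz * res                       # point in this grid
        cell = np.floor(scaled).astype(np.uint64)
        frac = scaled - cell                     # position inside the cell

        feat = np.zeros(features)
        for corner in range(8):                  # 8 corners of the cell
            offset = np.array([(corner >> d) & 1 for d in range(3)],
                              dtype=np.uint64)
            idx = cell + offset
            h = np.uint64(0)
            for d in range(3):                   # hash corner coordinates
                h ^= idx[d] * primes[d]
            h = int(h % np.uint64(table_size))
            # Trilinear interpolation weight for this corner.
            w = np.prod(np.where(offset == 1, frac, 1 - frac))
            feat += w * table[h]
        encodings.append(feat)
    return np.concatenate(encodings)  # shape: (n_levels * features,)

enc = hash_encode(np.array([0.25, 0.5, 0.75]))
print(enc.shape)  # (8,) -> 4 levels x 2 features each
```

The payoff of this design is that most of the model's capacity lives in fast table lookups rather than in a deep network, which is what lets the remaining neural network stay tiny and train in minutes on a single GPU.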