The impressive progress of Omniverse and digital twins | Nvidia GTC 24 panel



Driving back and forth between San Francisco and San Jose is never fun. But I’m glad I made the trek from the Game Developers Conference in SF to the Nvidia GTC 24 conference in SJ so I could moderate an interesting panel about the industrial metaverse.

The session was entitled “Digitalizing the world’s largest industries with OpenUSD and generative AI.” In a place full of 16,000 engineers and sprinkled with celebrities like Nvidia CEO Jensen Huang and CNBC’s Jim Cramer, I was glad to see we had a full house in the room.

Our panelists included Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia; Patrick Cozzi, CEO of Cesium; Joe Bohman, EVP of PLM products at Siemens Digital Industries Software; Andy Pratt, CVP of emerging technologies at Microsoft; Christine Osik, head of simulation at Amazon Robotics; Benjamin Chang, VP of global manufacturing at Wistron; and Paulina Chmielarz, industrial operations innovation director at Jaguar Land Rover (JLR).

Progress for digital twins

Our charter was to see how far and fast the world’s largest industries are racing to digitalize their complex industrial and enterprise processes. After a few years of hearing about the benefits of digital twins and the Pixar-created Universal Scene Description (USD), those industries are pretty sold on the idea and are racing to establish the OpenUSD standard. With standardized assets, enterprises will be able to collaborate and share assets with each other to achieve greater efficiencies and productivity.


If you haven’t heard of digital twins, they have a clear use case, in contrast to the consumer version of the metaverse, which has fallen out of favor. With digital twins, a big company can design a virtual factory and perfect that design before it puts a shovel in the ground for the physical factory. Once the design is good, the company builds a twin of it in the real world, with extreme precision.

That requires an accurate simulation so that the simulated digital factory matches the real-world factory. And by outfitting the factory with sensors, the company can capture real-world data that can be fed back into the virtual factory, improving its accuracy, saving money by building things right the first time, and keeping the twin continuously updated as missions and real-world demands change.

Our panelists did a good job talking about how far simulation has come and how far it has to go in closing the “sim-to-real” gap. How much accuracy do enterprises and heavy industry companies require in their digital twins? The answer seems to be that the simulations must constantly get better.

Here’s an edited transcript of our conversation. And you can watch the video version if you wish.

Nvidia GTC 2024 Omniverse/digital twin panel

VentureBeat: I’m Dean Takahashi, lead writer for GamesBeat at VentureBeat. I’m sure you think I got lost here coming to this enterprise and industrial session. It’s not a video game session, but I’ll try to sneak in a game reference every now and then. I’ve been covering tech for 35 years and games for about 27. It’s interesting to see the intersection between games and tech. This whole technology here is one example of that. We’ll start with introductions with Andy Pratt, from Microsoft.

Andy Pratt: I’m corporate vice president for emerging technology. I have gaming studios in my team as well, and so it’s a safe space. But the main team that I lead is around how we take advanced AI teams, software engineers, and we’re just focused on our customers and partners. How do we take this emerging technology and hit value at scale? To your point, we’re using game technology in heavy assets a lot just now.

Benjamin Chang: I’m from Wistron. We’re mainly a leading EMS provider. My major role is process engineering and development, and also digital transformation for smart factories.

Christine Osik: I’m from Amazon Robotics. I lead teams that are responsible for systems operations for those robots, and for simulation. We build everything from robotic work cell-level emulators for development and testing, all the way to full digital twins.

Joe Bohman: I’m head of PLM products at Siemens. Anybody that saw that boat in Jensen’s keynote, that really cool LNG carrier, that was us. Within PLM we provide a set of tools for our customers to help them design all sorts of great products that we use every day.

Patrick Cozzi: I’m CEO at Cesium. We enable software developers to build 3D experiences and simulations with geospatial data, providing a global canvas of terrain and buildings that folks can build their applications on.

Paulina Chmielarz: I’m the director for digital and innovation at JLR. You might have seen in the keynote a small snippet of the Defender, or our famous Range Rovers. We digitalize manufacturing, supply chain, and procurement.

Rev Lebaredian: I lead the Omniverse and simulation team here at Nvidia. We’re building a collection of technologies to create simulations, so that our AIs can be born and raised inside these virtual worlds before we bring them out into the real world. You saw a bit of that in yesterday’s keynote. Jensen explained how the next wave of AI is going to be grounded in reality, in the physics of reality. Omniverse is that bridge for us.

VentureBeat: I’m going to start with a question for Paulina, Christine, and Benjamin. Each of you represents some of the world’s largest industries – automotive, supply chain, warehouse automation, and electronics manufacturing. What are your industries’ technology requirements and priorities for your digitalization efforts?

Chmielarz: I could categorize the priorities for us in three groups. The first, fundamental element is our infrastructure. The technical element of gearing up our factories and our operations with the correct baseline to use the systems. The second element is systems of record. Those big elements of data that we need, the transactions we need. Then all the technology on top of that, that really takes it to the next level. Those are the three big pieces of the story for us.

Osik: At Amazon, our fleet of robotics is essential in bringing better value for our customers. Speed, price, and selection, that’s what drives it. We have a range of priorities, but our first priority is making sure those robots are safe. We use simulations for that. We use digital twins for that. We also use digital twins in design to optimize flow through the buildings. Then we also have the operations. We need to get the same thing as Paulina. We have to collect all the metrics and data coming from the various buildings and distill them for the operators to operate their robotic solutions efficiently.

Chang: For Wistron, we started our smart manufacturing initiative around 10 years ago. At the beginning, our key concept was that we had to build a data-driven factory. It’s easy to say, but it took us a very long journey to deploy all the IoT technology required, so that we could visualize everything. We can collect data that we’ve never collected before and derive more insights from the machines, from the sensors, from the meters. That’s the foundation of what we have today. For us, for the manufacturing stack, the data collection, the IoT, would be the first priority. Visualize everything. Then we can go from there to do the automation and simulation.

VentureBeat: I’ve never been to Taiwan, but I saw part of your factory through the Apple Vision Pro. This is not by any means our first Omniverse or digital twins panel. We’ve been doing this for years now. But maybe we could start with Rev, just on how much progress we’ve made. How real do these digital twin simulations need to be for industrial applications?

Lebaredian: It’s a very common question. How close do we need to get to real life in terms of matching exactly what’s happening? The honest answer is it depends. It depends on what problem you’re trying to solve specifically. Every problem is different. The properties of the physics, of the thing that you’re simulating–what’s important is going to be different based on what it is. As a result, there’s no one tool or simulator that’s ever going to solve all the problems. We need a lot of different software and tools to do that.

We’ve been doing this now for many years. From the very beginning, our first problem was just constructing the worlds, bringing all the data together, so you can even start doing the simulation. In order to have digital twins, you first need to take all the data that comes from different tools, from different data sources, from different sensors and IoT devices, all in different formats and in different places, and bring it into one form that is ready to run the simulation.

We settled on Universal Scene Description, OpenUSD, to be the place where all of this data in motion can come together and be harmonized, so that we have that opportunity. The big progress we’ve made as of last year–we now have a standards body with five founding members, including Apple, which was kind of surprising to us. They typically don’t do a lot of these kinds of standards things up front. But they came to us and wanted to start it. We opened it with Adobe, Autodesk, Pixar of course, where it originated, and soon after we had more than 20 general members who had joined, many of them represented by people on this panel, with Siemens and Cesium and hopefully some others soon.

That’s a huge milestone. It sounds very basic, just getting your data into one form, but for anybody that’s done anything with data in complex systems from all over the place, that’s a huge milestone. It’s very hard.

Bohman: Jumping in on that, when I talk to a CIO, typically they’re going to have 100 to 1,000 different systems. I come in and I’m going to talk to them about PLM and solve all their problems. But the reality is, every enterprise that I talk to has just a wealth of different information in different formats. With USD, this idea of being able to bring that all together is going to unlock a lot of value for enterprises. Most enterprises that I talk to struggle with the diversity of information and how to make sense of it and bring it together. We’re excited to be part of the partnership and driving it forward.

Lebaredian: We were thrilled to get Siemens on board. We need them to help us drive the industrial side of USD.
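
For readers who want a concrete sense of what “bringing all the data into one form” can look like, here is a minimal, hypothetical sketch using OpenUSD’s Python API (the pxr module that ships with USD builds). The file names and prim paths are placeholders invented for illustration, not any panelist’s actual pipeline.

```python
# Minimal sketch of composing data from several tools into one OpenUSD stage.
# Requires a USD build with Python bindings (the pxr module). All file names
# and prim paths are hypothetical placeholders.
from pxr import Usd, UsdGeom

# Create a top-level stage that acts as the shared scene for the digital twin.
stage = Usd.Stage.CreateNew("factory_twin.usda")
UsdGeom.SetStageMetersPerUnit(stage, 1.0)  # agree on units up front

# Sublayer in data exported from different tools. Each source stays in its
# own file and is composed non-destructively instead of being converted.
root_layer = stage.GetRootLayer()
root_layer.subLayerPaths.append("building_shell_from_cad.usda")       # CAD export
root_layer.subLayerPaths.append("robot_cells_from_layout_tool.usda")  # layout tool
root_layer.subLayerPaths.append("overrides_from_iot_pipeline.usda")   # sensor-driven overrides

# Reference a vendor-supplied asset under a known prim path.
conveyor = stage.DefinePrim("/Factory/Line1/Conveyor")
conveyor.GetReferences().AddReference("vendor_conveyor.usd")

root_layer.Save()
```

Because sublayers and references compose non-destructively, each tool or team keeps ownership of its own files while the twin sees a single unified scene – roughly the harmonization Lebaredian describes.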

VentureBeat: I do want to switch into how much progress has been made. Paulina, can you start on that as well?

Chmielarz: All of the technical work and the really big breakthroughs with USD, but also bringing in Omniverse–it’s all great to see. At JLR we’re working on a lot of use cases right now. When we, inside the business, talk about what this technology brings to us–at the end of every story I always ask, what does it make people feel like? We recently had an event in one of our innovation labs. We had around 50 guests from the business in the lab. We put up the big screens and presented some of our facilities in Omniverse. It was a lot of work, with Nvidia’s help. Nothing changes more in business transformation than that moment of silence when they see it. That whisper at the end. “Wow!”

Let’s not forget that the technology can make people feel cool about the work they’re doing. We can sometimes take that tiny step forward. It’s the minimum effort needed to get the work done, but sometimes it can be really cool for people.

VentureBeat: Do people sometimes say, “Wow, but I wish it could be better?” Or is it good enough now?

Chmielarz: It’s how it looks. It’s beautiful.

Nvidia uses Omniverse to help design digital twins for data centers.

Cozzi: We’ve made tremendous progress in the last year. Rev mentioned USD, which is fantastic. There’s also the Metaverse Standards Forum, helping to bring together many organizations and many standards to facilitate collaboration. Even separate from the technology, the mindset of folks wanting to do industrial digitization, wanting to explore that ROI, across so many different industries–certainly Cesium started in aerospace, but we see so much more today in architecture, engineering, and construction.

Osik: As far as progress goes, we started about three years ago. We were looking to find a faster and better way to build our 3D worlds. We had a lot of skeptics within our organization. We had a lot of people doing manual work, a lot of people who have to have their hands on the hardware to make it real. The progress we’ve made convincing them that they can do more with 3D virtualization and synthetic data generation–we’ve made great strides in convincing a lot of skeptics. There’s a massively growing need for more virtual tooling, more synthetic data, and more simulation.

Chang: We started kind of early. When we started the journey there wasn’t very much to use at the time around AI and simulation tools. But when we built up the foundation, the infrastructure, the data collection, it turned out that it wasn’t enough to just have a dashboard, just a system that can visualize everything. We had to make the most use of the data to improve and automate our operations. We started to look for other upgrades. For example, the simulation tools we have today. We try to use 3D models to digitalize all the component elements of our machines, all the material flows, things like that. What if we can change our existing processes? Could that be better? Can we find an outcome without interrupting our processes?

Another example, since we have such a huge amount of data, can we use AI to monitor that and help us make decisions? Change our planning or optimize the recipe for our manufacturing processes? That’s what we’re doing today. We’re seeing very positive results. For the next step, we’re going to use it to deploy across our manufacturing facilities.

Pratt: A good indication, like Rev said, is what took place in the last three weeks. It was at one of the manufacturing locations I used to run. We had a partner come in and 3D scan the whole environment. We were doing part design that needed to be laser cut and then autonomously handled and moved through the equipment. Real-time telemetry coming off the line. I would have scheduled that project and said, “All right, you have a quarter.” Everybody gets to work. I’m expecting timelines. Three weeks later they showed it to me running live in the facility. They even did embedded capabilities in Power BI. Each of those technologies is tough, but to go from scan to CAD to workflow to part design to product design to part coming out–it was a controlled exercise, but three weeks is, wow.

Nvidia’s Earth-2 software will simulate the planet’s climate.

The thing I want to see more now is, how do you take that to full production scale? Some of the points you had about the telemetry, all those work streams. How do we get that quicker, better, faster, and not need as many diverse skills all coming in? There are some great work streams there. I’m happy with the progress, but never happy enough.

VentureBeat: Grand Theft Auto VI is coming out in 2025. That means the team has thousands working on it [and some of them] have been at it for a good 13 years or so [by the time] it comes out. That’s an amazing amount of work going into entertainment. Joe, I talked to your CEO [Roland Busch] in January at CES. He started explaining a bit about the difference between simulation and entertainment. Do you want to take a crack at that for our audience as well?

Bohman: We’ve had a lot of dialogue about–our customers are doing engineering. A great example is, when you build a car, one of the key characteristics is the coefficient of drag. We have sophisticated tools for calculating coefficient of drag that run through solving a bunch of–we’re doing a lot of differential equations and so forth. When we first looked at the Omniverse, the reaction was, this simulation isn’t what we’re doing. But as we’ve looked at the Omniverse, it’s a place where a lot of these different simulations can come together. The sorts of simulations we’re doing to calculate a coefficient of drag–if you go out and look at the booths, you have simulations of driving, simulations of what’s happening with electromagnetic interference. Within the Omniverse, we’re bringing in–Nvidia is bringing quite a bit of capability to be able to bring simulation together from multiple sources. That’s an exciting conversation.

Lebaredian: Game engines, and what we do with games, which is near and dear to my heart–I worked on that stuff before starting with the Omniverse. Before that I worked in VFX, rendering, making pretty pictures for movies. There’s a fundamental distinction between doing world simulation for entertainment purposes versus doing it to match reality so you can go build things, build intelligence that’s going to play in that environment.

If you’re doing it for entertainment purposes, often the most entertaining images, the things you want, have nothing to do with the physics of reality. If you’re making a movie like Enter the Spider-Verse, the rendering for that movie looks nothing like real life out here, and that’s why it’s cool. You want to design something that’s entertaining, and that might mean it’s magical, it’s super-powered, or all kinds of other stuff. Game engines are designed around that. How do you take the vision of the director, somebody who’s creating, and build something that maybe has a link to reality and physics, but that’s not the priority? The priority is making it look nice.

Doing world simulation, which includes all kinds of physics – fluid dynamics, rendering that’s actually a simulation of how the physics of light and matter work, how light interacts with matter, simulating what a sensor would experience inside that world – has to be physically grounded. That’s a very different system. Until we introduced RTX into our GPUs five years ago, it was basically impossible to do that in real time. With the introduction of ray tracing and RTX into our GPUs and the acceleration that comes with it, we now have the option of simulating the physics of light and matter with physical accuracy and doing it fast, so we can build these AIs and the systems that we need.

Osik: We have scientists in-house that are doing a lot of complex calculations, a lot of manual work. Some of the skeptics I mentioned–we’re able to build a whole variety of different types of simulation tools. Not just the digital twins, not just the workstation emulator. We built a tool with Omniverse Kit that allows our applied scientists to experiment with sensor and camera placement. This is work that used to take weeks and months in the labs. Now they have this great tool that we built very quickly, and it takes them hours. Having the accuracy you get with the simulation tools available today is the important part. As Rev said earlier, there are a lot of different simulation tools specialized for different things. Being able to interchange the configurations using OpenUSD is essential, but making sure that it’s the right tool, a simulation tool grounded in the physical world, is the most important part.

Bohman: One last thing on this point, the fusion of gaming and this world. I had this amazing moment with one of the big energy companies. We’d modeled this massive offshore platform. The person had been part of getting the design in–it’s similar to those big ships you were looking at. I was with basically the grandfather of this asset, the person who’s been there since commissioning. The thing’s 15 years old. He knows every bolt and every inch. He’s so happy with this twin that he now has. We can interact with it in full fidelity. No decimation.

Then this young apprentice comes in who’s going out to do a big capital project on it for the first time. It was incredible to see this person jump on it and just hit it like a game. He was flying around the rig. He was looking at all the details. It was just intuitive to engage in that tooling. We said to him, “Have you ever used the tool before?” He said, “No, but it’s just like a video game.” It was amazing hearing the feedback. He came back after doing the commissioning project. “It felt like I’d been there before. I knew what I was doing. I knew where I was going.” It was amazing to see that motion of the founding father meets the new generation, the fusion of the gamer in the digital asset, and how seamlessly it led to a safer, on-time delivery of the project. It was very special. It’s nice to see the worlds coming together.

Cozzi: Given the Grand Theft Auto reference, all the years that went into that development–I don’t know for sure, but I would guess that a lot of that is on the content creation side. All the artists modeling the environment.

Lebaredian: Right. The vast majority of most game development, even for relatively simple games, is art content.

Cozzi: I think you see where I’m going with this. On the simulation side, what about–we can acquire from satellites, aircraft, and drones. We can use generative AI. Maybe we’ll be able to play GTA in downtown San Jose.

VentureBeat: If you had to take a guess at what’s the hardest project to do–we have GTA VI. We have Dune 2. We have building a big factory as a digital twin. What do you think of the challenge there?

Bohman: Every project is challenging in different ways. What I like is that with what we’re doing around the Omniverse and USD–what’s really happening is, we were just talking about this a minute ago. This ability to calculate so much more, so much faster, is what’s producing these wow moments. There’s this set of problems, like simulating a factory floor at scale. For years we’ve had to take the problem and make a smaller problem that we can solve. Now we’re in this world where we can solve the big problems. They’re all hard to solve, but what’s exciting is, we’ll see that the set of problems we can solve is bigger and more exciting and more productive.

VentureBeat: A problem the games industry is facing is that a fair amount of people might have been happy enough with the graphics of Grand Theft Auto IV. But we’re moving on to Grand Theft Auto VI. Do we have diminishing returns setting in? We have this idea of a sim to real gap, which your CEOs might wonder about. How much more real do we have to make these simulations? Do we still have a lot of space to run before we hit those diminishing returns?

Lebaredian: If I may take this one, the first thing you said about “good enough,” that Grand Theft Auto IV was good enough–I’ve been here 22 years. I’ve been hearing that computer graphics in video games are good enough from everyone in the industry since the very beginning. For some reason, every time we make it better, every time we add more graphics horsepower to our GPUs, people seem to buy them. They seem to make better games with more fidelity. Nobody ever goes back. They don’t say, “Okay, I think that was enough. We don’t need any more quality. We’re going to make our new games look like they did 10 or 15 years ago.” I don’t think humans are satisfied with “good enough” until it’s exactly the fidelity and the quality and the complexity of the world we experience around us every day.

Bohman: That boat – and everybody should go and look at the boat, by the way – that was a wow for me, when I saw the boat. When I was with the CEO of that shipyard three weeks ago–he’s running a shipyard. He’s trying to increase his profits. He’s trying to run his business and increase profit. A lot of that hinges on his manufacturing process – how thick the plates are, what they look like when they’re welded. If we can show them, in the Omniverse, how that all fits together and how he can operate his business in a more efficient way, that’s a big change for him. We’re not accurate enough yet for that, and so there’s a lot of room to run.

Osik: That’s where my skeptics come in. To train autonomous robots and perception systems to be safe, to operate around humans, you need accuracy. You need real-world data. Our engineers and scientists are training on collected and annotated real-world data. When we started doing synthetic data, they were not sure it was going to work. But with the realism available today for synthetic data–what they found was they weren’t able to improve beyond a certain point without augmenting with synthetic data. We now have synthetic data of a high enough quality that it can augment the real-world data. Someday it would be great if we could do it all with synthetic data.

Autonomous mobile robots (AMRs) and a manipulator work together to enable AI-based automation in a warehouse, powered by Nvidia Isaac.

VentureBeat: Benjamin, what role do you see generative AI playing? Where are you applying generative AI?

Chang: From a manufacturing perspective, we see several areas with very high potential to use generative AI. One example is optical inspection. We have a lot of optical inspection driven by AI decisions. But typically we need a lot of samples. We do a lot of labeling to know what’s a good or a bad result. Most of the time you don’t have time to wait to collect enough defective or passing samples. When you need 5,000 or even 10,000 samples to make the model mature, you just can’t wait a month or two to do that. But what if I had a tool that let me start with five samples, and then I could generate synthetic samples, 10,000 or even more? Then I could speed up the model’s maturity. We’re doing some proofs of concept right now. The results so far are very positive. We think this is one area of high potential where we can use and deploy generative AI to improve optical inspection, and even video inspection.

A second example is design. Housing design for desktop and laptop computers, there are a lot of rules. That used to be handled by mechanical engineers. Where should there be a stiffener? Things like that. We can start with an engineer’s experience and teach a model how to learn that kind of thing. But sometimes you still need an engineer to get involved and spend a lot of working hours with the model. What if we could train a model with preliminary designs generated from existing databases? We’ve been doing testing for a couple of months, and again, we see some very positive results so far. We think generative AI has a lot of potential for the manufacturing sector.

Bohman: How many people out here are looking at retrieval augmented generation in your places? A lot of people. We look at it in three parts. AI is fed on data. One of the observations we have is, so much of the data is in enterprises. It’s not on the public internet. I was talking to some customers last week. One of our customers makes nuclear submarines. I asked, “Is the missile compartment on the web?” He said, “God, I hope not.” The point is, how can we bring AI to those sorts of applications? A lot of the data is sensitive. That’s the value of the customer’s data. That’s why I asked about retrieval augmented generation. We’re doing a lot of work with that to allow our customers to benefit from their data in a secure way, where they don’t have to share that data with the public internet.

As you mentioned, engineers are asking, “Hey, why is it that software engineers have these great code generation tools, but as a mechanical or electrical engineer, I don’t have those tools?” Just as you’d see in the software tools, we’re putting that into our engineering tools. And the third thing is, we have a lot of stuff for operating factories. Vision systems, things like that. We’re bringing AI into those.

Pratt: Building on that, we had a good chat earlier about how that fusion happens between the different tools. In the Microsoft context, when you have 400 million users on some of your products, the learning you have–you have to take generative AI to that kind of scale. It’s been incredible, but to your point, being able to create those safe data estates was a big focus for the teams. How do we bound the safety of the information, but still have access? And obviously RAG plays a big part. But in this area, I’m excited about what we’re talking about here. You take a team-centric capability, the very specialized models and information that you have in there.

It does still need to be structured, though. Sometimes we get carried away thinking we can just put gen AI on top of everything and the magic happens and it’s trustworthy. As we think about applying these–the importance of getting that data estate structured, understanding the security risks of those data estates. It’s surprising how many tools you already have in your environments that have been doing that for quite a few years. Those are great places to start, in those structured data sets, leveraging the capabilities that are there. It can be very compelling.

I definitely think that’s an area–you look in your businesses and look across, there is great benefit when you already have information well-classified, well-structured. It’s a great starting point. Any project that depends on magically reaching out everywhere and giving you all the context and being perfectly grounded and providing insight, those demos don’t usually go into production well. There’s a lot of learning to do, for sure.
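
Neither Bohman nor Pratt describes his company’s implementation, but the retrieval-augmented generation pattern they’re referring to can be sketched in a few lines: retrieve the most relevant internal documents for a question and pass only those snippets to a model, so proprietary data stays inside the enterprise boundary. The corpus and all names below are hypothetical, and a real system would use a proper vector database and an access-controlled language model.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Not Siemens' or
# Microsoft's implementation; the documents and prompt are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Private, in-house documents that never leave the enterprise boundary.
documents = [
    "Weld plate thickness for hull section A must be 22 mm.",
    "Line 3 vision system recalibration procedure, revision 7.",
    "Torque specification for battery tray fasteners: 12 Nm.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    query_vector = vectorizer.transform([question])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top_indices = scores.argsort()[::-1][:k]
    return [documents[i] for i in top_indices]

def build_prompt(question: str) -> str:
    """Ground the model in retrieved snippets instead of the open internet."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# A real system would send this prompt to a locally hosted or
# access-controlled language model rather than printing it.
print(build_prompt("How thick should the hull plates be?"))
```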

Omniverse works with the Apple Vision Pro.

VentureBeat: Patrick, this one’s for you. What role does data interchange, 3D standards, interoperability, openness–what role is all that playing? How much is necessary?

Cozzi: Openness, interoperability, 3D standards, it’s central to what we’re doing. At Cesium we have an abundance mentality. We’re swimming in opportunity. We think the pie is growing. It’s the most exciting time to be in tech. In order to solve the challenges in front of us, we need to focus on what we’re best at – geospatial – and we want to work with a whole ecosystem of partners out there to have that solution interoperate. It’s also very empowering to the users and the customers, where they can select their best of breed products to solve their challenges.

I wanted to give one example that’s totally possible today, without writing any code. I don’t think this was possible even a year ago. At the intersection of geospatial and AEC, today you could make a design model in Revit and export it with the USD Connector, bringing it into USD Composer. Then you could also bring in Google Photorealistic 3D Tiles. Essentially the 3D geospatial data that goes into Google Earth, you can bring that into USD Composer through the Cesium plugin. You’re bringing all these best-of-breed technologies from Nvidia, Autodesk, Google, and Cesium into one fused environment. That’s through USD. That’s through open standards. That’s very exciting. It’s where the world needs to be.

VentureBeat: I’ll start with Paulina, and I’ll give you a choice. You can talk about tangible results, or what’s coming next.

Chmielarz: I don’t think we need to talk about tangible results, because we already have a lot. Just by looking at it–you go to the exhibition hall and you can see the tangible results, some very cool ones, especially the ship. What I see next is, especially from the business side–when we enter the conversation about bringing such technology into our companies, there’s always that minimum viable team that understands the technology, knows it, and exactly like those gamers, they can fly through it and use it to the best possible result. But not every employee in our companies has those skills and knowledge.

For me, the big part of the story is how the ones who are not gamers or coders can also use it to their benefit. What are those big changes and transformation programs where we can help the business translate between the technology and the business problems? How can we get those technologies embedded permanently into our operations? It’s not just those systems of record, PLMs and others. It’s how the whole ecosystem moves up, and how we get all of our people on this journey. This is what I see next.

Cozzi: I think we’ve made great progress on interoperability for the visuals, for geometry and materials. To get to that next level for interoperable simulation, it’s all about the semantics and the behaviors, bringing that content through the ecosystem.

Bohman: I’m going to be in the future too. I guess we’re all in the future. But I think we’ll all be amazed, as we see all this technology propelling us forwards, by the improvement in the overall experience of our customers in the enterprise. Having fun and not fighting with all the data in different formats. Things moving slowly. I can’t work with it. AI helping me along. This idea that we can transform this experience. We have a new generation of engineers coming in that really want to interact with these systems in a modern way. We’re all going to be amazed by how modern that’s going to get, and how fast.

Osik: I’m going to go to the future as well, because we’ve seen lots of tangible results at Amazon Robotics. The standardization is going to unlock things. Our organization has more third-party simulators than we can count. Each one is being used by a certain user persona. That tool works for them. For us to be able to accelerate and build a digital twin, we need to be able to take all the findings from all those things and put them together.

Immersive engineering with Sony, Siemens and Red Bull.

The other thing I see going forward between the simulation and generative AI is what we’ll unlock–it’s what Paulina was asking for. How do we put this in the hands of people that aren’t coders, that want to use these tools very easily? Between gen AI and simulation and some of this photorealism, you’re going to see a convergence of technologies. It’s going to start unlocking things.

Chang: Talking about tangible results, we already benefit a lot from getting high throughput, higher EO, things like that. What we need to focus on for the next step–we’re still highly dependent on our technical teams. We need data scientists and data engineers, putting them together to create these systems. Like Christine said, we want to reduce the dependency on the technology stack. We want to make the platform, the UI much more usable. Even if I don’t have a tech background, I can still make use of those tools to create my own models, train my own models, even though I’m not trained as a data scientist. We believe we can enable more people, more employees, to help improve our manufacturing process.

Pratt: I’ve always wanted to be Iron Man. That vision of him sitting in the lab, speaking to life a part, and then around him, seeing that part produced and applied onto a project he’s working on. It would be so exciting to every engineer that could suddenly become a multi-disciplinary engineer, speaking to life a design, collaborating on that design, making that design, showing it to my mom who gets super excited, that’s where I’d love to be.

Lebaredian: I’ll continue on this theme that Paulina started here. One of the most surprising and amazing things to me with the ChatGPT moment and what I saw from it, it’s that a computer can now translate human intent into a program. The fact that you can speak human and instruct this computer and it figures out the software necessary, the algorithm, to execute what you intended, is very profound.

I read a few weeks ago, Jensen was speaking somewhere, and he said that for the last 15 years or so, when people asked what they should go study at university, what they should become, typically he’d say computer science or computer engineering. But he rightly said, you shouldn’t do that anymore. There’s enough of us out there. We’ll be building these systems. What’s going to be more valuable are people who have domain expertise in the sciences – understanding molecules, understanding the stars, understanding fluid dynamics, understanding everything else – because everyone is going to be a programmer. We won’t need a computer scientist to translate their intent into software and tools that will help them.

That’s what’s going to happen. We’re going to see a world very soon where everybody is a programmer. They just don’t know it.

Question: Now that we have generative AI, is it able to develop itself and its own creativity to the point where it could replace you?

Lebaredian: I actually don’t think that it will ever be possible to replace human creativity for humans. If you think about it, there are things that humans have figured out how to do with machines or in other ways that are better than humans, but we still keep going back to the human way of doing it. For as long as we can remember–probably the first people that started to be able to run, they started racing each other. We had the Olympics in Greece and the Olympics now. People still race each other to see who can run faster. We figured out how to ride horses. That was a new technology. Eventually we made machines that can go faster than horses. We made Formula 1 cars. But we’re still interested in how fast humans can run relative to each other.

When it comes to creativity, when it comes to art, the value of those things comes from humans and what they project onto it. We care about what other humans can do, what humans create. The stories and the things behind them. There’s no way a computer is ever going to be able to generate that.

Mercedes-Benz is building digital twins of its factories with Nvidia Omniverse.

Question: With this wave of generative AI comes a demand for a lot of data. Can you comment on a bottleneck that you’ve experienced when it comes to digital content creation? What technologies do you think will help with that?

Osik: With content creation, OpenUSD unblocks that bottleneck. We can get content anywhere, and now generative AI can create content for us. I think that one’s not so much of a bottleneck. The data bottleneck is a real problem. Everyone has their own data, and for privacy and IP and legal reasons, each company keeps their own. The challenge for each company is how to leverage that data the right way. In industrial settings now, we’re coming from a time when everything was manual. Some places aren’t even collecting that data into a central place yet. It’s scattered on the edge. That’s a complexity that we have to figure out and move forward from. But there are ways to deal with that.

Chmielarz: As we speak right now, in industrial estates, we produce more data than we consume. One of the advantages of upgrading your technology stack is that you have to clean up that data. Within the cleanup, one of the most important activities is getting the business side to tell you which data is for what. All the technical teams and the business teams need to come together and clean it up. Only then can we proceed forward. One of the big advantages I see, rather than an issue, is the fact that we improve the quality of the data that we create, generate, or process. That’s what I saw with us entering Omniverse. The amount of activity my team put into metadata improvement was very good. Not only for Omniverse and USD, but also for the business systems as well. Which for me, and for the whole ecosystem, was a very good result.

VentureBeat: I’ll leave you with one thing. We know that the Earth-2 simulation got a lot of air time at this GTC keynote. I had a chance to ask Jensen this question a while back. I asked whether, when [Nvidia] gets to the final form of this evaluation of climate change with Earth-2, we get the consumer metaverse for free. And he said, “Yes.”


