Supercomputing / HPC Simulation Accelerates Autonomous Driving – Microsoft & Ansys

Speaker:

Ladies and gentlemen, please welcome to the stage Walt Hearn.


Walt:

Can you imagine if two 787s crashed every day and everybody on board perished? Well, that’s what’s happening on the roadways around the world. Approximately 1.2 million people die every year in car accidents. And this is causing a major economic loss to the world. And because of that, we’re seeing a major transformation inside the automotive industry towards autonomous vehicles. Now, not only is there a human life impact, but also there’s a huge economic opportunity. Today inside the automotive industry for autonomous vehicles, we’re generating about $54 billion in revenue. And by 2026, we’re looking to get to close to $500 billion in revenue. Now, the traditional automotive companies and OEMs are going to change the way they develop cars. So are Ford and GM going to be the leaders, or are the ride-hailing companies like Uber and Argo AI going to step forward and bring the next generation of cars to market?


Walt:

But one thing we are seeing today, when we look at studies, is that people do believe autonomous vehicles will actually be safer than human drivers within the next five years. But to bring these cars to market, there’s a big challenge. If you hear from the top CEOs at a lot of the automotive companies, they’re saying, “Look, we have to physically drive a car about 8.8 billion miles to test for all the failure modes to make sure that we can bring a safe autonomous vehicle to market.” Well, if we look at the top companies like Waymo, they’re driving about a million physical miles a year. So you take 8.8 billion divided by a million and you get roughly 8,800 years. We’re never getting one of these cars to market, right? And so what we realized is that simulation is going to be a key part of bringing autonomous cars to market. But not just any simulation.
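As a rough back-of-the-envelope check of the figures Walt cites, the arithmetic looks like this (a minimal sketch; the required-mileage and fleet-mileage numbers are the approximate ones quoted in the talk):

```python
# Approximate figures quoted in the talk.
required_miles = 8.8e9       # miles of testing said to be needed to cover failure modes
fleet_miles_per_year = 1e6   # physical miles a leading test fleet drives per year

years_needed = required_miles / fleet_miles_per_year
print(f"Years of physical driving required: {years_needed:,.0f}")  # ~8,800 years
```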


Walt:

You see gaming engines and different variations of simulation, but this is about being able to represent a physically accurate virtual environment for autonomous vehicles. And this is where ANSYS comes into play. And so what we do is we run simulations where we characterize the streets, the guardrails, the cars, the paint, and we put it into one environment. So what we’re simulating is actually predicting what you see in the real world. Now, to do this, we have to look at all of the different systems that we have to bring together for an autonomous vehicle. And today when you look at your car, you see your radar and a couple of cameras. But autonomous vehicles have a whole system of different sensors. You have radars, you have cameras, you have LIDARs, you have ultrasonics that all serve different purposes to make an autonomous car safe, right?
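As a loose illustration of what “characterizing the streets, the guardrails, the cars, the paint” can look like as data, here is a hypothetical asset description; the field names and values are invented for illustration and are not ANSYS’s actual format:

```python
from dataclasses import dataclass

@dataclass
class SceneAsset:
    """Hypothetical description of one object in a physically accurate virtual scene."""
    name: str
    geometry_file: str          # mesh describing the object's shape
    radar_reflectivity: float   # how strongly the material returns radar energy
    lidar_reflectance: float    # fraction of laser energy returned to the LIDAR
    paint_rgb_response: tuple   # simplified color response for camera simulation

# One characterized asset that could be dropped into a simulated street scene.
guardrail = SceneAsset(
    name="highway_guardrail",
    geometry_file="assets/guardrail.obj",
    radar_reflectivity=0.9,
    lidar_reflectance=0.55,
    paint_rgb_response=(0.7, 0.7, 0.72),
)
print(guardrail)
```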


Walt:

But not only do you have to develop all of these different components, you have to design them, you have to worry about how they integrate on the platform and then how they perform as an overall system, right? And this is a complex challenge, because you have one company developing the radar and another company putting the entire system together to make sure that it operates correctly. And so the sensors are a key component of that. But beyond that, then you have the hardware, you know, the printed circuit boards that are a part of this overall system. And when you look at your cell phone, right? Your cell phone is designed to last three years in conditions that aren’t that harsh. But when you look at an automobile, that hardware is going to have to last 10 years, it’s going to have to operate in the snow and in the desert. And it can’t fail, because if it fails, then it’s going to be a tragic loss of life. So that’s a major challenge.


Walt:

And from there you have all of this software that’s being developed. And when you look at an airplane, an airplane has millions of lines of code. But when you look at an autonomous vehicle, it has hundreds of millions of lines of code. When you’re looking at an autonomous vehicle, one of the key pieces of software being developed today is the perception algorithms. And what perception algorithms do is they identify the objects inside of a scene. So you see the boxes and they’re identifying: is that a cat, is that a tree, is that a sign? Then the output of the perception algorithms goes into sensor fusion, where the perception results from all of the different sensors come together and feed into the policy software, and the policy software makes the decision: should this car speed up, should it slow down, should it stop, should it turn right?
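A minimal sketch of the data flow Walt describes — per-sensor perception feeding sensor fusion, which feeds a policy decision. The function names and the simple rule in the policy are illustrative assumptions, not the actual production stack:

```python
def perceive(sensor_frame):
    """Hypothetical per-sensor perception: return labeled objects seen in one frame."""
    # In a real stack this would be a trained neural network per sensor modality.
    return sensor_frame.get("detections", [])

def fuse(detections_per_sensor):
    """Hypothetical sensor fusion: merge detections from camera, radar, LIDAR, etc."""
    fused = []
    for detections in detections_per_sensor:
        fused.extend(detections)
    return fused

def policy(fused_objects):
    """Hypothetical policy: decide whether to stop or keep going."""
    if any(o["type"] == "pedestrian" and o["distance_m"] < 20 for o in fused_objects):
        return "stop"
    return "maintain_speed"

# One simulated tick of the perception -> fusion -> policy pipeline.
frames = [
    {"detections": [{"type": "pedestrian", "distance_m": 15}]},  # camera frame
    {"detections": [{"type": "vehicle", "distance_m": 40}]},     # radar frame
]
print(policy(fuse(perceive(f) for f in frames)))  # -> "stop"
```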


Walt:

And so you have the sensors, you have the hardware, you have the software, and you have to bring that all together to make sure that you can develop it. Doing that in a physical environment is very challenging. But if you can do it in a virtual environment, if you can bring that all together and test all of that in a physically accurate environment, that enables you to get these cars to market much faster. And so what you’re seeing here is a virtual environment from ANSYS where we’re simulating the radars in the top right, the LIDARs in the bottom and the cameras on the right. And so we’re simulating this environment to see how all of the systems perform, and behind the scenes it’s also checking to see if the software is operating correctly.


Walt:

And from there, though, you’re just seeing one sensor, right? You’re seeing one radar, you’re seeing one LIDAR, you’re seeing one camera. But when you look at an autonomous system, you have eight radars, 10 cameras, five LIDARs, so you have this big set of sensors. Well, to simulate that, it’s not that easy. You can’t just do that on your normal desktop. Each sensor requires a GPU, right? So you have a big set of GPUs, a big HPC cluster that you have to bring together. And that’s just to simulate one scene, right? But we have to go beyond that. We have to simulate thousands of scenes overnight, because we have to vary the road conditions, the weather conditions, and we have to run these overnight so that we can develop better sensors and better algorithms. And so that’s why HPC is so critical: we have to be able to parallelize these simulations and run millions of variations overnight.
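A sketch of the scheduling idea behind that: if each scene variation ties up a GPU for a while, the only way to get through thousands of variations overnight is to fan them out across a cluster. The parameter sweep and the process pool below are stand-ins for illustration; on a real HPC setup each task would be a GPU node job submitted through a scheduler:

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

# Hypothetical parameter sweep: vary weather and road surface for one scenario.
weather_conditions = ["clear", "rain", "fog", "snow"]
road_conditions = ["dry", "wet", "icy"]
variations = list(product(weather_conditions, road_conditions))

def simulate_scene(params):
    """Stand-in for one physics-based sensor simulation run (really a GPU job)."""
    weather, road = params
    return f"scene(weather={weather}, road={road}): sensors simulated, software checked"

if __name__ == "__main__":
    # Fan the variations out in parallel instead of running them one after another.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(simulate_scene, variations):
            print(result)
```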


Walt:

So the key is that by physically driving cars alone, we’re never going to get autonomous systems to market, right? We have to be able to virtually build these autonomous systems and test them to bring autonomous cars to market. And so next up I’d like to introduce Nidhi Chappell, our partner from Microsoft, who’s going to talk about how we can take this virtual environment to Azure to make it possible. So thank you, yeah.


Nidhi:

Thank you, Walt. Thank you. So I’m Nidhi Chappell, I run the HPC product team in Azure. Walt talked a little bit about the challenge of autonomous and the promise of autonomous, but I also want to dig in a little bit deeper into what it really means to bring these pipelines to market. Walt talked a little bit about the workflow, and I am going to do a double click on this, because it’s really important to understand how different autonomous pipelines are. So we talked a little bit about how you have a test fleet that you are trying to optimize before you can really get any vehicle autonomously on the road. And these test fleets are really doing sensor collection, whether it’s radar, LIDAR, or all sorts of cameras. Then they’re implementing or testing perception on it — how quickly you can perceive different objects. Then you’re putting in place some sense of the policy the car is supposed to be following, and then you are actually taking some action on it.


Nidhi:

Pretty simple workflow if you think about it. But behind this workflow there’s a lot going on, and I think we talked a little bit about this. On the sensor collection side, each test fleet is collecting anywhere from 10 hours to 24 hours of driving information. Now most of our customers don’t have one test fleet, right? They actually have lots of test fleets, geographically dispersed, collecting information in all sorts of scenarios. So you can imagine the amount of data they are collecting: petabytes of data being collected from test fleets every day. Now, every mile that is driven on this test fleet is not equal. You don’t want to be parsing through each and every mile. So now a data engineer actually has to go and curate through all of this data, figure out what parts of the data are actually good for further processing, and create a sense of ground truth: what is the perception supposed to be?
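To make that curation step concrete, here is a toy sketch of what “figure out what parts of the data are good for further processing” could look like; the event names, thresholds, and fields are invented purely for illustration:

```python
# Hypothetical drive-log segments from one day of test-fleet collection.
segments = [
    {"id": "seg-001", "event": "uneventful_highway", "minutes": 42},
    {"id": "seg-002", "event": "unprotected_left_turn", "minutes": 3},
    {"id": "seg-003", "event": "pedestrian_near_miss", "minutes": 1},
]

# Only rare or safety-relevant events are worth labeling as ground truth.
INTERESTING_EVENTS = {"unprotected_left_turn", "pedestrian_near_miss", "cut_in"}

curated = [s for s in segments if s["event"] in INTERESTING_EVENTS]
print(curated)  # the uneventful highway miles are archived, not labeled
```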


Nidhi:

Then you have machine learning scientists, AI data scientists, who have created these perception models. And you’re constantly running the driving scenarios that you have collected through these AI models, constantly tweaking your AI model to get better accuracy. And once you have some sense of what your accuracy is, before you can actually put it in a real vehicle, you want to verify it — this is what Walt was just talking about. You want to verify it with different types of models, whether it is a sensor-based model, a policy-based model, or even physics-based simulations. So you’re constantly improving this pipeline as you build it, validate it, and then eventually, at some point, deploy it in your test fleet.


Nidhi:

Talk about a CI/CD pipeline at scale, right? You see how big this pipeline is, how constantly it is being innovated on, and the challenge of bringing compute to all of this data, which may be geographically dispersed and may run to petabytes, along with all of the compute that is required for it.


Nidhi:

The other thing that is interesting in all of this is that the whole autonomous vehicle development process is very different from how cars are manufactured today. Our car manufacturing process today is very mechanical-engineering driven. This is very much a software problem. So really the way our customers are thinking about this is: I need a software stack, a software platform, that is traceable and gives me consistency, compliance and automation. Then I can actually have this huge pipeline that I’m constantly innovating on, constantly improving upon. And once you have this pipeline, you can actually overlay any scenario on top of it, whether it’s driving, test vehicle development, edge constraints, really anything. But this software platform that you have, that you develop for your test vehicle and for your environment, is critical for the development of autonomous vehicles.


Nidhi:

And this is where I do think, as the industry moves forward, Microsoft plays a critical role. Software being in our DNA, we pride ourselves on being a software platform company. Our core mission is to enable new, emerging use cases like these autonomous use cases to be built on the latest and greatest dev tools and machine learning algorithms, on top of specialized infrastructure. In terms of the way we approach the market, and the way we think the autonomous vehicle market is going to develop, our approach is to provide an open-source Microsoft AV platform. We are trying to provide an integrated tool chain that can help your DevOps teams run this pipeline all the way from data engineering to machine learning operations to verification and simulation with partners like ANSYS. Having this integrated tool chain allows you to actually simulate end-to-end scenarios.


Nidhi:

We also like to keep it an open ecosystem, to the extent that we want our partners to be able to build differentiation on top of it. And talking about partners — and this is a great example of the collaboration we are doing with ANSYS, which is very focused on physics-based simulations and in our opinion has a very comprehensive validation suite — our approach is to go through these partners, not around them, because we really do think the promise of autonomous is huge, but the challenge of autonomous is huge too. And no single tech company would be able to solve this by themselves. So we are collaborating across a lot of companies, like ANSYS, like Rescale, to bring these tool chains to market, because ultimately we all want to bring this capability to market soon.


Nidhi:

Talking about ANSYS, I also wanted to take a little bit of time to explain how a small portion of the workflow — the validation — can drive such different infrastructure requirements. So we talked a little bit about what ANSYS does, which is the validation of these scenarios, and there are really different approaches to it: whether you are doing sensor-based validation, scenario-based validation, or pure physics, HPC-based validation. Now, all three of these are required for completeness, and they have varying degrees of fidelity to them. But the hardware requirements underneath them are actually pretty interesting.


Nidhi:

So Walt mentioned the sensors we have today. Generally a Level 2+ car would have eight-plus sensors. To validate that sensor model, you need 2,200 CPUs and 200 GPUs — and that’s just for a Level 2+ car. When you go to Level 4 or Level 5, you’re looking at a minimum of 30 sensors. So you’re linearly scaling the amount of compute you need, and the mix of compute you would need, to simulate that.
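Taking those figures at face value and assuming the scaling really is linear in sensor count, as stated, a quick sizing estimate looks like this (purely an order-of-magnitude sketch):

```python
# Figures quoted in the talk for validating a Level 2+ sensor set.
l2_sensors, l2_cpus, l2_gpus = 8, 2200, 200

# Assumed linear scaling to a ~30-sensor Level 4/5 configuration.
l4_sensors = 30
scale = l4_sensors / l2_sensors
print(f"Estimated Level 4/5 need: ~{l2_cpus * scale:,.0f} CPUs, ~{l2_gpus * scale:,.0f} GPUs")
# -> roughly 8,250 CPUs and 750 GPUs, as an order-of-magnitude estimate only
```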


Nidhi:

Now contrast that with when you are in the second phase of this validation, which is where you are validating whether the car made the right scenario planning — did it actually enforce all the right rules? That piece of the pipeline is actually very CPU-intensive, so it requires a lot of CPUs, and then at the very end there is a little bit of rendering that is done by the GPUs. So now your infrastructure is very different. You want infrastructure that has the best HPC-class CPUs with maybe an eighth, maybe a half of a GPU just to render that scene. Very different infrastructure requirements there. And then if you’re going into the full-blown HPC work, where you’re doing CFD-like models to validate how the vehicle motion happens or how the actuators moved, you’re now looking at a full-blown HPC cluster with an RDMA back end.
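One way to picture the three very different infrastructure profiles being described here is as a simple table of resource mixes; the numbers below are illustrative placeholders, not actual Azure SKU specifications:

```python
# Illustrative resource mixes for the three validation styles described above.
validation_profiles = {
    "sensor_physics":  {"cpus_per_job": 16,  "gpus_per_job": 8.0, "rdma_interconnect": False},
    "scenario_replay": {"cpus_per_job": 64,  "gpus_per_job": 0.5, "rdma_interconnect": False},
    "full_hpc_cfd":    {"cpus_per_job": 120, "gpus_per_job": 0.0, "rdma_interconnect": True},
}

for name, profile in validation_profiles.items():
    print(f"{name}: {profile}")
```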


Nidhi:

So hopefully, through all of this we walked through, you realize that the challenges of autonomous will be solved through good collaboration on the software side of things, where a lot of software vendors come together to provide the tool chain for the data challenges, the tool chain for the AI challenges, and the tool chains to manage the validation part of it. And the future of autonomous will actually be built on a wide variety of infrastructure — a lot of it the specialized infrastructure that Microsoft is also bringing to the market.


Nidhi:

I genuinely believe that, in combination with partners like ANSYS and partners like Rescale, we will offer a software stack and a hardware offering that accelerates the development of autonomous vehicles. But I also do see the challenges it brings and the exciting things it opens up: challenges in algorithm development, continuous development on the hardware side of things, and continuous improvement in how quickly we can actually get to those 8.8 billion driven miles. We really do not think we should physically drive 8.8 billion miles; together, we can actually drive a few million miles and be able to simulate billions of miles. And that’s how I do think we’ll get to our autonomous vision. Thank you.

Author

  • Walt Hearn

    As Vice President of Ansys Americas, Walt was responsible for the go-to-market strategy and sales performance for Ansys' channel, territory, strategic and enterprise segments in the Americas and Israel. Leading the largest and fastest-growing region, Walt focused on developing talent and next-generation leaders, emphasizing collaboration across all functional areas.
