HPC Industry Experts Panel – Discussing the Future of High Performance Computing
Speaker:
The moderator is going to be Edward Hsu, who is vice president of product for Rescale. So without further ado, Edward Hsu and the panel.
Edward:
A big pleasure this afternoon to moderate a fantastic panel. If any of you have done any light Googling on what HPC is, these industry titans show up. So first is Earl Joseph, CEO of Hyperion Research. Please welcome him. Then Addison Snell, CEO of Intersect360. Woohoo! And Tim needs no introduction, because you just spent 20 minutes with him.
Tim:
Right.
Edward:
So maybe we’ll just kick off with Earl Joseph. Maybe you could talk a little bit about your firms and what you’re focused on. Obviously HPC is part of it, but what’s your main thesis? What do you guys focus on?
Earl:
Sure. So we use the term HPC with a much broader definition: it’s really any place that you’re doing scientific research and analysis using big compute and big data, and it includes everything from quantum computing to AI. And we attempt to track everything sold on earth, everywhere. So on a quarterly basis, we track all the on-prem services, I mean, systems: hardware, software.
Earl:
We’re also right in the middle of two studies right now: one is to track every AI R&D effort on earth, and the other every quantum computing effort. But what we do is lots of surveys. We try to be aware of what’s happening at a point in time. And last year, I think we were going to close out the year with about 23,000 to 24,000 separate surveys that we did. So we do very big data collection for that.
Edward:
So you’re the census for HPC software.
Earl:
That’s what we came to be. Yes.
Edward:
Addison
Addison:
Hi, I’m Addison Snell with Intersect360 Research. I think Earl described pretty well what we do as industry analysts, so I don’t need to be redundant on that. Market sizing, forecasting, trend analysis, all the things that we do as analysts, we’ve been doing as Intersect360 Research for high performance computing, and now also for AI hyperscale as a distinct market from HPC. We’ve followed trends like big data and AI to the extent that they’re progressive across these spaces, and we’ve been doing that for the last 13 years. So happy to be here. Thanks.
Edward:
Fantastic.
Earl:
Yeah
Edward:
A lot of brain power here on HPC, which is great.
Tim:
I do need to point out, I’m actually with Microsoft unless somebody made a trade that I didn’t know about.
Edward:
Oh, no way. We want a healthy mix of analyst thinking and-
Tim:
No, I think they’ve traded me for four young people; prospects coming later.
Addison:
I was out when you were introduced for the last one, and I saw it said Intel there, and I thought, “Oh my gosh, we do have to talk. I didn’t know that.”
Edward:
So maybe, Tim, you can kick this off for us.
Tim:
Sure.
Edward:
Many industry analysts have talked about 2019 as a pivotal year for HPC and the transformation to cloud.
Tim:
Yeah.
Edward:
What’s your vantage point on that? Do you agree with that, first of all? And what were some of the things that were inhibitors prior to 2019?
Tim:
I do. I don’t think we’re quite at the hockey stick in 2019, but I think it’s pretty simple. Addison and Earl will both agree that the vast preponderance of HPC, AI, whatever you want to call those shops, are doing something with cloud right now. And what people are understanding now is that yes, there’s getting workload up and running, but there’s a whole process that’s required in order to take it into production.
Tim:
And just the sheer number of people that we know are in the process of taking it to production, maybe it happens this year, maybe it happens next year, but all of that seeding that’s been going on for the last three or four years, you’re going to start seeing it finally brought to bear. We just hadn’t been there yet, but we’re here now.
Edward:
So is it a people thing, people are finally comfortable, or technology? What are some of the pivotal things that have changed?
Addison:
Well, first of all, I would actually say 2017 was a breakthrough year for cloud and HPC, and that’s where we saw it really come out in our budget surveys, that we’ve been looking at cloud for over a decade.
Edward:
So you called it two years before the Hyperion guys.
Addison:
Well, we saw it come up in the budgets in 2017 into double-digit growth, 44% growth in 2017, and then it stayed double-digit in 2018. We’ll put out the 2019 number soon. So now we’re into that higher growth. Before that, yeah, everyone talks about the barriers of HPC and cloud, but really it was just a maturation of the business model more than anything else. It’s not just a matter of, “Hey, here’s an unlimited supercomputer. Go use it.” There’s a lot that has to go along with that.
Addison:
And a couple of things happened starting a few years ago that really accelerated it. One of the most important was the maturation of licensing models, particularly in engineering-driven areas, where I could take my Fluent license or whatever. They had utility licensing, but it got more cloud-friendly over time.
Addison:
And the other thing that matured is really the cloud-managed services space, with solution providers like Rescale, but also cohorts like, say, Nimbix, or Cycle Computing, which is now part of Microsoft, or UberCloud in Europe: having more of an ecosystem that can help organizations balance workloads both on-premise and in the cloud, because the majority are hybrid clouds.
Edward:
Right.
Earl:
So as you know, we just declared, six weeks ago I guess it was, that this is a tipping point for cloud computing, meaning people are reaching into their pockets to spend on the cloud for HPC jobs in all areas. We have to maintain a five-year forecast like Addison does, and we botched it quite frankly, and that’s why we made quite a big announcement about it, because literally we had to increase the numbers for 2019 by a billion dollars. So there was actually an additional billion dollars that was spent in the cloud.
Addison:
Should have called it back in 2017; you would have been all right.
Earl:
But we’ve been tracking clouds now, back when they were also called grids, for about 15 years. And we had pretty healthy growth, but you’re absolutely right, we missed that forecast.
Edward:
Forecasting cloud coverage is hard.
Earl:
Yeah, I think it is. And there were two big reasons. Because what we do when we miss a forecast, or actually before we create the new one after realizing something’s wrong with the old one, is go and do tons and tons of surveys, and we really found two aspects. One is, all the cloud providers created an infrastructure that was dramatically better for HPC: better hardware accelerators, the software environments, and, as Addison mentioned, the licensing models. All those pieces were in place. But the second one is that it got easier to use. It’s all about ease of access. And as Tim was mentioning, is it 500 people or 5,000 people able to do this? It’s moving to be 50,000 to 500,000. We’re in that phase right now. And that ease of use is just crucial there.
Edward:
Got it. So Bill from Intel earlier this morning talked about the stages of grief. I guess it was denial, anger, depression is in there somewhere, finally acceptance. Oh, bargaining is there in the middle. So for the folks you chat with, and certainly across the industry, where do you feel most people are? If cloud is this transition, are most people in the earlier stages, the later stages, or somewhere in between?
Earl:
Do you want me to start?
Edward:
Yeah, sure.
Earl:
Okay. So we do a lot of work with buyers and help them in their strategic planning sessions. And I have to tell you, there are people in different categories. There are people in 100% denial right now. Some people would call them dinosaurs, but there are other terms for it. But there are people that have very specific reasons. First of all, if they’ve written their codes and their jobs in-house and they’ve optimized everything in their center, staying on-prem for them is just going to be better than the cloud for quite a while, because they’ve optimized everything on earth for that.
Earl:
Then there’s a whole other category that’s already using clouds very heavily. And we find that there’s about 25% of the market in each of those camps. Roughly half of the market is in the middle. So I would not call it denial. Maybe bargaining might be the right category for that middle camp.
Edward:
Bargaining? Got it. So bargaining is your call. And what is yours?
Addison:
To the extent that Bill talks about denial or anger in people’s responses over the last 10 years of cloud, I think the problem has been that a lot of the cloud messaging, first of all, has treated all potential users as the same, as if cloud is a fungible commodity. And second of all, the business proposition sounded a lot like, “Do you have a few minutes for me to tell you the good news about cloud computing?” Like it’s this inherently desirable state we should want to achieve, and if you don’t come with me, then you’re being left behind as a Luddite.
Addison:
Most businesses are more practical than that. They would like to know what’s economically beneficial or beneficial to their workflow about moving to cloud. And for most of the last 10 years, it just wasn’t cheaper to shut down what HPC had been doing in-house and move it out to the cloud. Now, we’re really reaching a point where these hybrid clouds can be quite efficient. But I would still really come down to the notion of segments, and there’s two different ways to look at that.
Addison:
One is by application domain, because an engineering-driven code that’s largely ISV code is going to be very different from, say, something in biosciences where you deal with a lot of open source, or something in finance where you deal with a lot of in-house applications and algorithms. Those are three very different approaches to scalability, licensing, data sovereignty, all of these major issues.
Addison:
The second way is by experience. I liked that Bill cited our study with NCMS from several years ago, looking at small manufacturers. It’s very different taking someone who’s never done HPC and moving them into cloud versus taking someone who has HPC experience and trying to migrate them into cloud. And you want to be very careful, in either case, not to offend them by implying that they’ve been doing it wrong. It’s just not a good way to start the conversation.
Edward:
No, it’s not.
Tim:
He’s quite a salesman. The other good thing that’s happening now is that I don’t think we need to convince anybody of anything. There are enough folks out there who have important work to do, or as we like to say, “people with a problem and a deadline.” It’s just a question of finding who the folks with a problem and a deadline are, and then we just work toward the right answer for that particular workflow. And that tends to be a very different conversation than with the person who says, “Okay, we’re trying to decide what our cloud strategy is going to be.”
Tim:
And deciding what one’s cloud strategy is going to be starts in denial. Those are the stages of grief you go through in that process with a customer. It just takes time. As I alluded to before, people have done some remarkable work, and worked really hard to get systems up and stable, and their schedulers working, and the user community happy. Everybody was finally happy, and people were able to go home on Friday and not come back to work until Monday, and then their boss says, “We’ve got to go to cloud.” I would dig in my heels, too.
Tim:
And so it’s just a question of, as Addison says, working at people’s pace, understanding that the pace of change, both societally and commercially, is accelerating on its own. We don’t need to force it.
Edward:
Addison, you mentioned there are different groups: ISV users, open source users. Among these groups, are there certain ones that are moving to cloud faster or slower that you can talk about?
Addison:
Well, yeah. The thing that has been slowest moving to cloud has been the ISV licensed applications, because it brings up a problem, not only in terms of how the licensing mechanism works, but a lot of times when someone says, “This is going to be more efficient for you to run in cloud,” that’s code for cheaper. And what the customer really means is, “I want to pay less for software.” So now I’m going to the ISV and saying, “What I want you to do is work extra hard to come up with a new licensing mechanism, so that you make less money.” And that’s not always, again, the best sales pitch to give to the ISV.
Addison:
Yeah, you want people in cloud, but that should be accessing more capability, in the end getting more work done, not less. And I think it just took a few hiccups getting through that, because it really implies a very different business model for that software provider.
Edward:
It’s interesting. Ten-plus years back, when virtualization first hit the scene, everybody thought we’d be buying fewer servers, and all the server sellers hated virtualization; I heard a lot of that working at VMware. But once people virtualized, we found that they were actually using that opportunity to buy even more servers, maybe more memory-optimized and so forth. So what’s been the observation when people go to cloud? Are they in fact spending less? Are the ISVs actually losing, or are we finding that people are doing more and therefore spending the same or more?
Earl:
So normally what we see is that when people go to the cloud, they’re actually spending more. In fact, in all our numbers we’ve seen no slowdown in on-prem purchases, but the spending in the cloud is dramatically higher. One issue that we’ve been studying for many years is what we call pent-up demand. If you have an HPC center doing research and things like that, every analyst, every scientist wants to do 10 times as much. The amount of workload they could be processing is dramatically higher.
Earl:
If you compare it to regular enterprise computing, once you’ve processed everyone’s payroll or every transaction, you don’t need another server. You’ve filled that need. So in HPC, there’s this tremendous pent-up demand, and it actually fuels the growth fire. And that’s the one chart that Tim was showing at the tipping point: x86 and Linux clusters brought the cost of doing HPC down dramatically. And what happened? The market just exploded in growth.
Earl:
What we’re looking at right now and trying to figure out, because we have to do these five-year forecasts, is whether that combination of AI along with big compute is going to ignite that next growth wave. And our answer is yes, we believe it will; we just don’t know when.
Edward:
Okay. Do you share that observation as well?
Tim:
Oh, 100%. I mean, if you go back and look at that, the other markets grew as well. It grew because giving people access to the ability to solve bigger problems is good for everybody. How’s that ever a bad thing? And what’s exciting about where we are at this moment in time is that that’s exactly what we’re doing: giving people access. But to Addison’s point, we made the mistake of telling people it was going to be too easy. And so, all we need to do is take a step back.
Tim:
But this is where the workflow comes in. This is where people have to change their thinking. Sorry if I’m a broken record on this, but I’ve been a broken record for 20 years: what you need to do is take a step back and look at the workflow, because we used to say, “I’ve got $3 million.” Now, if we really have reached this point where we’re able to do the science and deliver more value for less than it costs us to do it, then let’s just figure out what we can go do, because the value is already built in, as opposed to putting an artificial number on it.
Tim:
And so, we need to turn the whole process around in terms of how we evaluate whether it’s less expensive or not. It’s the same thing with how we evaluate whether an electric car is really more efficient. Everything gets turned on its end when we start looking at things in the light of collaboration plus compute.
Edward:
I’ve been in the industry 20 years. I’m always surprised to be reminded, though I shouldn’t be, that between the business process and the technology, the slowest to change is always the people and the processes, not the technology.
Tim:
Okay.
Edward:
So let’s shift to the organizational sense of purpose. Early on, when virtualization first hit enterprise computing, I had many conversations, either with clients as a consultant or as a product person at VMware, and they were very uncomfortable: “I hug those servers, and those servers are mine. And if you put in a virtualization layer, workloads are moving around. That’s highly uncomfortable.” And then once they finally got used to a software-defined approach to hyperscale or enterprise computing, it was okay. Well, now it’s actually all going to cloud.
Edward:
And so, when you’re talking to your clients or peers in the industry, do the HPC organizations you work with see themselves as people providing a computing facility, or do they see themselves as a strategic partner providing compute services, regardless of where the compute comes from?
Addison:
So again, I would segment that. If you talk to a national lab or a lot of academic research centers, part of their charter is to provide a center of research. And in a sense, cloud threatens that. Because if I can go to a cloud instead of my academic research center, well then, what do I need the academic supercomputer for?
Addison:
But in commercial markets, which are actually more than half of HPC spending and have been the bigger part of the growth engine overall, their sense of purpose really has more to do with what they do as an organization. And this is why we see HPC as an enduring market going forward. Toyota buys a lot of computers for a lot of different things, but the ones they use for designing cars, trucks, and minivans are HPC. And you could say the same thing about Pfizer and curing disease, or the CIA and spying on people, or British Petroleum and finding oil. Whatever the core thing the organization does is, that’s where you see HPC deployed.
Addison:
And with scientific and engineering applications, or R&D efforts in general, we don’t get to a point where the problem is de facto solved and we can all go home. Like in theory, in the future, we could have the last scientist at the last chalkboard writing the last equation, and people cheering because we’ve finished science, but in general, science isn’t like that.
Addison:
I loved the precision medicine talk that we had on precision genomics with Mark Oldakowski of Bionano Genomics. It made me remember the year 2000 and the Human Genome Project: we mapped the human genome. That’s one of those scientific moments that sounds like the end of something. “Oh, you’ve mapped it. Great. Are we done? Can we go?”
Edward:
We’re all done.
Addison:
Well, no, you just invented genomics. And here we are 20 years later, and here’s all this extra work there is to do. And that’s what it looks like in an established field. In something new, it still comes down to what the core thing is for that organization. In that NCMS study that Bill was referencing, when we look at small manufacturers, if you have one qualifying question to try to determine whether someone’s a likely first-time adopter of HPC or not, the question is always, “Who’s the quality leader in your segment?” If they say, “It’s us,” you’re in.
Addison:
And there are a lot of other reasonable ways for a manufacturer to try to differentiate other than being the quality leader in their segment. But the ones who say, “I will invest to stay ahead of my competition on quality,” that’s your best tip of the arrow for finding new HPC investment, because that’s what they do.
Edward:
Any other thoughts?
Tim:
No. It’s a great point.
Earl:
Sure. So we do a lot of segmentation of the different categories: where cloud computing makes sense, where it doesn’t make sense. And like Addison said, it is really tied to the core mission of the organization. So in some cases, there are natural places where your security requirements are just off the charts, if you’re in the military or something like that, so you may do it differently.
Earl:
But another thing that’s going on is that there are some technology issues right now in the world of computing, especially when you get into big compute and big data, and that is that moving the data around is really costly, in dollars, electricity, and everything else. So you tend to want to do the compute closer to where the data resides.
Earl:
So if it’s residing up in the cloud, using cloud computing has a fundamental advantage that I think will never be overcome. If most of your data is sitting on-prem, moving it up to the cloud and bringing it back down has additional overhead. And for a lot of the customers we work with, that overhead is extremely costly. So a lot of this actually turns into a cost discussion. That’s why I said bargaining was my vote: because right now cloud computing is not cheaper for most HPC. It is in a few cases. In some cases, it’s as much as 10 times more costly than on-prem. So those are the trends that we’re watching and trying to map through.
Edward:
Got it. Speaking of bargaining, is hybrid cloud bargaining, or do you see that as a long-term steady state towards a combination?
Addison:
Long-term steady state.
Earl:
Yes, absolutely. I agree.
Tim:
This is going to sound like a shameless plug for our host, but I guarantee you there’s no Rescale customer that’s paying 10 times more than their on-prem costs in order to do what they need to do. And a big part of that is that the people who are spending more than they should don’t have a partner with whom they’re working to streamline that process and find all the efficiencies. It’s not 10 times more expensive because cloud is 10 times more expensive; it’s because their implementation is 10 times more expensive.
Tim:
And so it really points out the need for what’s being created. What we have today is a whole new strata of partner in this space, providing customers, both commercial and increasingly public sector, the ability to manage this cloud transition. People just didn’t need that before, when they were deploying stuff on-prem; that’s why you had grad assistants and Domino’s on speed dial, because people would rack and stack the machines themselves.
Edward:
Yeah. Simplifying the transitions is super key, not just to cloud but also to multicloud, which we’ve heard a lot about today.
Tim:
Yeah.
Earl:
Mm-hmm (affirmative).
Edward:
We did a high-level analysis on just the customers we have at Rescale and were surprised by what we found… actually, take a guess. Here’s the test.
Earl:
Sure.
Edward:
What fraction of Rescale users use two or more cloud providers in terms of their jobs?
Earl:
More than 50%.
Addison:
I would have said most, but I don’t know.
Edward:
Tim?
Tim:
I’m going to make this more complicated than it needs to be. I was going to answer a question with a question, but I’m going to go with the cop-out: 50/50.
Edward:
So, you’re right. Yeah, it’s actually exactly that. If you’ve run 25 or more jobs on Rescale, the odds are 50% that you’ve run on two or more clouds. So if I had a present, I’d give it to you.
Tim:
That was dumb luck.
Edward:
So maybe we shift from the organizational sense of purpose to the individual sense of purpose. I’m going to go on a short segue here. There was a time when, whenever you saw a scientist in a movie, you knew he was going to die. Do you guys agree with that? There’s a movie called Universal Soldier. Remember that?
Earl:
Yes.
Edward:
I saw all these scientists and all these soldiers. Right at the beginning of the movie, I was like, “Man, all you scientists are going to die.” And that was totally right. So there was a point in time, I remember when Iron Man-
Addison:
Are these doors locked?
Edward:
It’s not a Hollywood film here. I remember when Iron Man first came out, a news article said, “Oh, for the first time, Hollywood is portraying that it’s cool to be smart and be a scientist or engineer.” And I think in the last couple of years, we’ve seen a lot of the spotlight being put on the social, local, mobile kind of connectivity-driven innovation. Now it’s a little bit different because of privacy and all that stuff. So when is HPC’s, or big compute’s, Tony Stark moment? Is it coming soon, or is it already here and we just don’t notice it because these guys are less flashy?
Earl:
I think it’s coming very soon. I’ll tell you a little story. We were asked by NPR to do some longer-term forecasts on what big data could do. And one thing we came up with is what high school students would be given as homework. We made the forecast then for 20 years out; now we think it will happen in the next five years. Students will be asked to do the equivalent of PhD-level work for one night’s homework. And that requires the big compute, big data, AI, machine learning, deep learning, and one of the other pieces that you showed: cost reduction, because it has to be cheap enough for a student to do it.
Earl:
But what you could do is pose a question, and then have the computer itself, through AI and machine learning, do the entire check of all history, of all research ever done on that question, test a new concept, and come back with the findings: literally a PhD level of exploration in one night for a high school student. I think that redefines our culture and our intelligence capability.
Edward:
Make science more approachable.
Earl:
Yeah. But also, imagine what our scientists are going to be like when they graduate if they’ve done the equivalent of 100 or 200 PhDs before they actually graduate.
Edward:
Yeah.
Addison:
I think it happens all the time, and you just don’t realize it. I mean, there are lives saved in car crashes because of the simulations done on airbags and crumple zones, and you don’t know what the delta is, who would have survived and who wouldn’t have. But it doesn’t have to be a car crash on the world’s largest supercomputer. Before he retired from Procter & Gamble, I talked to Tom Lange, who was the poster child for the use of high performance computing in consumer product manufacturing, and I asked him: was there ever any product P&G, which has been around a long time, wanted to come out with, but couldn’t until they had HPC and simulation? And I was impressed that he had one like that. You’re going to laugh at what it was. Tide Pods.
Edward:
Really?
Addison:
He said, “It’s not like we only just thought of that. We’ve wanted that product for the last 75 years.” But other competitors had come out with ones that were compressed powders, and people hated them. They work in the dishwasher, where the water’s hot, but in the laundry, where it’s cold, they don’t dissolve all the way, and you get grit in your clothes. People know what I’m talking about, right? You get the grit in your clothes, and you hate it.
Addison:
So they knew they wanted it to be a liquid. Well, P&G has liquid fabric softener and liquid detergent; that’s not a problem. What’s the hard thing to design? The little envelope. The little jacket is hard to design because you need a plastic that dissolves immediately and completely in cold water but does not get dissolved by the liquid inside the jacket, and it has to be shelf-stable for a long time at a wide range of temperatures and humidity, and it has to be shippable.
Addison:
You think you’ve got that? Now you’ve got to get the product inside the jacket on an assembly line moving at about 35 miles an hour, and into the box. And if you think that works, okay, the next hurdle is that it can’t cost more than about 25 cents a pod, because it’s laundry and people won’t buy it if it costs more than that. It’s actually a big problem.
Edward:
Wow! Yeah. I never realized.
Addison:
And it’s Tide Pods. And then you buy it, and my wife says, “Oh, I thought of this years ago.” But you don’t think of the engineering that goes into a laundry detergent pod. And stuff like that happens all the time. And I know I’m talking a lot, but on big compute, I’ll go the other direction. The other one I really look forward to on the horizon: we talked about the human genome, and that took decades. I think the next Human Genome Project moment is that we’re probably now about 10 years away from getting a complete simulation of a human brain.
Addison:
And if we can do a complete simulation of a human brain, start thinking about the avenues of research in neuropathology that you can start to simulate, from Alzheimer’s to autism, concussions, stutters. These are hard physical experiments to run. If we can simulate them, think of what we can have after that.
Edward:
Why are some people so mean, for example.
Addison:
Yeah. Sure.
Edward:
Awesome. Tim?
Tim:
I don’t have anything as cool as Tide Pods, but then I was trying to remember-
Addison:
Who does? They’re Tide Pods.
Tim:
Right.
Edward:
I didn’t even know Tide Pod was cool till today.
Tim:
Who thought somebody was going to eat them, too, right? You couldn’t design for that one. I think the next big one is actually weather. I’ve been very fortunate to spend a lot of time at NOAA with Neil Jacobs, who is the administrator there now. When we talked before about science getting cool: he just spoke at South by Southwest. This is the guy running NOAA. Talk about… that’s the corner of nerddom.
Tim:
And here he is speaking at South by Southwest, because what he’s doing is going to make a difference. And they have embraced cloud not because it’s cheaper, but because their FV3 code, the next generation of the numerical weather prediction model that they’re going to use, is actually cloud-ready from day one. And the reason he wants that is because he wants to be able to collaborate all around.
Tim:
And then I’ll just tie that back to the reason I’m hopeful for all of this. It might not be the Iron Man moment, but I was just at William & Mary this weekend with a bunch of undergrad students. We did a Shark Tank-type thing, because these are non-computer-science majors; these are social scientists who were submitting papers. We had a panel, and there was money that would go toward small funding stipends for these projects. And all of them were incorporating climate models and the impacts of climate change on refugee displacement, violence, cholera, and all these other pieces.
Tim:
And so to me, as someone who gets immersed in these kinds of conversations, to be sitting at a 300-year-old liberal arts institution where they were talking about very significant uses of this technology and compute, because they could, was awesome. And that’s why I left there thinking, “We’re going to be okay.”
Edward:
Awesome. I think there’s no doubt that big compute, HPC, has really helped us solve some of humanity’s biggest problems. The challenge is, we probably all know more aerospace engineers who became bankers, or went to count ad impressions for a social media company, than bankers or social media people who became aerospace engineers. So is there a brain drain, and how do you see that shifting, or will it not? Will making big compute easier make supercomputing sexier, and make people want to go there rather than go count ad impressions at a social media company?
Earl:
I think it’s an important trend because there’s an additional trend affecting it. Most of the people in HPC who’ve had their hands on programming and are used to optimizing everything are an aging population, and they are retiring out, because there hasn’t been an influx coming in behind them. Without that influx, there will be a massive shortage. We’re already seeing a shortage of parallel programmers and various other functions. So I think ease of use is crucial. And the second part you mentioned was making it exciting for people, because right now it is not really an exciting world to go into.
Earl:
And so we try to work with different educational institutions on how they can make it more attractive. And I can tell you right now, if parallel programming is tied to writing games, that’s an easy one to sell.
Edward:
All right. Gaming is an approach.
Earl:
Gaming is an approach. Social media, too; the social media aspects of the whole thing. And now AI, and I know certain things are buzzwords, but to get the excitement you have to have buzzwords. I’m still thankful to Nvidia because they’ve come up with so many examples, like training computers how to play golf well, that’s an example of deep learning, and the whole autonomous vehicle thing. So Nvidia really has created that eye candy that makes it exciting. I would say that excitement gap is probably one of the biggest weaknesses of the industry right now.
Edward:
Tim, you guys are working a lot on getting academia and younger folks excited about or engaged in using HPC or big compute in the cloud. What are some of the strategies that you’re finding are working?
Tim:
Well, I mentioned before that for people who are in HPC and have been here a long time, everybody always has a reason that they’re in it. There’s a problem that they hope can get solved, and we usually don’t tell each other what it is because it’s something deeply personal, whatever it is. And so we’ve all carried around that sense of mission.
Tim:
Because I’ve had the good fortune to be exposed to a lot of undergrads and recent grads, I can say that generation has a sense of purpose. They have a sense of duty. And I feel like our job is just to make sure they have the tools to fulfill it. That’s why I was down in Williamsburg this weekend. I’m not sure that I can necessarily… well, they’re not going to listen to me, but I’m not sure there’s anything I’m going to do that’s going to make it exciting for them. But if what I can do is remove any barriers to them being able to just go try, that’s what we can do.
Edward:
Right.
Addison:
Yeah. There’s unquestionably a brain drain, if you want to call it that, to the hyperscale companies. I live in Mountain View. I see how many people Google is hiring, Facebook, and I know how many people are around. They are taking up a lot of talent. But to call it a brain drain implies that you’re losing that creativity. You’re certainly focusing them on something different, but there’s a lot of AI research that’s getting driven by the hyperscale companies.
Addison:
Now, you might say, “Well, I’d rather use that for weather simulation than for driving targeted advertisements,” but that’s a matter of perspective. That’s where the company gets its money from, and they are paying the people to do it.
Addison:
I think a better thing would be to start looking at what innovation can start migrating from hyperscale companies over into high performance computing. I think one of the more interesting ones that could happen is going to be in scalable computer languages.
Addison:
HPC is still roughly 25% C, roughly 25% C++, roughly 25% Fortran, and 25% everything else. What changes over the years is the constitution of the everything-else bucket, with things like Python, or Java before that. Things like Chapel never really got going. But what about a language like Julia, or Go?
Addison:
Go is a very scalable language. People love programming in it. Just because they built it independently, without listening to the boomers who said, “Back in my day, this is how we did it,” doesn’t mean that you couldn’t pull that language over and do something cool with it in HPC. I think we might start seeing things like that. Middleware would be another area where we could see some innovation.
Addison:
Last thing I’ll say is, I would again apply the notion of segments. Not all young people are the same, and they get driven by different things. There are people who are excited about science and climate change, and want to have that be their stamp in the world. And it doesn’t matter who’s paying them the most or what their social media app is, they’ve got a passion, they want to go do it.
Earl:
If I can answer it also, I think we need to address the issue of diversity, so we get a larger portion of the population in, and also our immigration approaches. That is, let’s try to be the world’s collection of all the HPC and scalable computing people, and make that much easier.
Edward:
It has been said that for connectivity-driven innovations, it’s easier to create a Facebook than the next rocket company. And therefore, in the interest of national security or just strategic industries, the government should be putting more money into… There are only so many Elon Musks to go around. Right? What’s your view on that?
Earl:
I’m a big proponent of the government investing in high technology. I’m very glad to see that the government has invested a lot in quantum computing, and in AI there are now two new initiatives; the Department of Energy has created a whole AI department and just announced the head of that department two days ago. I would like to see us be more competitive with the rest of the world, and in my rough terminology, that needs to be about fivefold what we’re investing right now. So we’re falling very short. But I’m very much a proponent of the government investing in longer-term things.
Earl:
I am a capitalist 100%, because I believe in industry. Like Addison said, you want your industry to chase the money, what people want. So you want to let industry loose to do that in the most efficient, effective way it can, and then the government funding the longer-range R&D would be the perfect model in my mind.
Addison:
Yeah. Government investment in long-range science, definitely in the national labs, is still extremely valuable. Industry drives innovation forward pretty well. I would say that the deck is still stacked against small companies a little more than large companies, that’s for sure. And the longer-range thing I would look out for in terms of the hyperscale companies, and we’ve written this into our vendor profiles of some of them, is that it doesn’t escape our notice how rare it is in human history to see so much power and influence concentrated into so few companies. I don’t think it’s really happened at this scale since the late industrial revolution. Historically, it does not last.
Addison:
And that is not a statement on the values of those companies, or a real microeconomic outlook on their profitability; it’s more of a socioeconomic or political-economic look at the concentration of power into so few companies. It’s not stable.
Edward:
Tim?
Addison:
Oh, sorry. You heard that.
Tim:
Well, is it happy hour yet?
Edward:
It’ll be very soon.
Tim:
No. But on the flip side, I’m a small company person; we were acquired by Microsoft about two-and-a-half years ago. Microsoft has Microsoft Research, thousands of researchers, and it doesn’t have a P&L. They’re able to just go solve hard problems, often in ways that someone in a university or a government agency wouldn’t be able to. The billion-dollar investment into OpenAI.
Tim:
I mean, those are the kinds of things it feels good to be part of. Because I read the articles, and I don’t disagree with what Addison’s saying. I think we’re all still figuring this out, but it feels really good to be at a place where you see the good things that we do, because we’re sharing the good fortune that we’ve had.
Edward:
Fantastic.
Addison:
I mean, a large supercomputer costs on the order of hundreds of millions of dollars. In 2018, we counted 11 hyperscale organizations that spent over $1 billion on IT infrastructure that year alone. Google became the first to spend over $10 billion, and it wasn’t close; they got over the bar easily. I think there are probably two that spent over $10 billion last year, maybe three, we haven’t finished counting yet. You’re talking about a 100X difference between a large supercomputer and what one hyperscale company spends in one year. That’s the level of difference I’m talking about.
Earl:
Yeah.
Edward:
I think we have time for one more question. So maybe we’ll start with you, Tim. We talked about a lot of things. What is the one thing that you are personally most excited about in terms of what all of this can bring? And if it’s Tide Pods 2.0, that’s totally-
Tim:
It’s just that there’s so much that we don’t know. There are so many places to go. I remember years ago we stood up the first Top500 system on the continent of Africa. It was in South Africa, with Happy Sithole. And that was a Herculean effort, and it felt great to do. But now, there’s no one on the planet with an idea who can’t do something about it, as long as they have an Internet connection. That, to me, is being part of this next revolution. And it’s not a widget; it’s about bringing all of that together. And we’re going to see it in our lifetime.
Edward:
Addison?
Addison:
Well, I already talked about the brain science idea. I remain excited about that one, so I’ll pull it back to some of the other topics. We’ve talked a lot about AI. I don’t think this is a question of AI versus HPC. I think AI is a new technique that HPC has available to it. And we’re going to start seeing AI-augmented HPC.
Addison:
A good example of that is, we’ve been talking about engineer-in-the-loop simulations for, what, decades? Right? And we don’t see them because the human latency is pretty high. But what about AI in the loop? If we can teach AI how to play Go or poker, we can probably teach it the rules of a different game, like optimizing an airplane wing: we give it the boundary conditions and the rules of the game, here’s a simulation, now go play. Don’t get rid of the engineer, the engineer still has to be there, but you’re essentially using the AI for exploration and target reduction. You can’t give it imagination, but you can give it a solution space and let it go.
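[Editor’s illustration] Addison’s “give it the boundary conditions and let it go play” idea can be sketched in a few lines. Everything here is hypothetical: `simulate_drag` is a toy stand-in for an expensive CFD run, and plain seeded random search stands in for a learned “player”; the point is only the division of labor, where the engineer supplies the rules and the machine does the exploration.

```python
import random

def simulate_drag(span, camber):
    """Toy stand-in for an expensive CFD run (hypothetical drag model)."""
    return (span - 30.0) ** 2 + (camber - 0.04) ** 2 * 1e4

# "Rules of the game": boundary conditions the engineer supplies.
BOUNDS = {"span": (20.0, 40.0), "camber": (0.0, 0.10)}

def ai_explore(n_trials=2000, seed=42):
    """Let the 'AI player' explore the solution space; seeded random
    search stands in here for any learned search policy."""
    rng = random.Random(seed)
    best_drag, best_design = None, None
    for _ in range(n_trials):
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
        drag = simulate_drag(**cand)
        if best_drag is None or drag < best_drag:
            best_drag, best_design = drag, cand
    return best_drag, best_design

drag, design = ai_explore()
```

The engineer still owns `BOUNDS` and judges the result; the search loop is just the part whose latency you can remove from the human.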
Addison:
We saw a lot of examples about mesh refinement. This is another idea that’s been around within clusters and shared memory systems for decades. But two things about that. One is, you could do it predictively. And the other is, a big thing machine learning has given us is a great discussion around precision. How often are we doing very exact 64-bit calculations on a model that wasn’t that good to begin with?
Addison:
So if I can refine the mesh around where the calculations are most intense, can I also, on the fly, reduce the precision in the areas of the mesh that are less dense? Can I do adaptive-precision computing on my HPC system and save a lot of those cycles?
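[Editor’s illustration] A stdlib-only sketch of that adaptive-precision idea, purely hypothetical: keep full float64 arithmetic where the field is “intense,” and accumulate the quiet regions in simulated float32 (rounded through `struct`), then check how little accuracy the mixed result loses.

```python
import math
import random
import struct

def to_f32(x):
    """Round a Python float (IEEE double) to single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

def mixed_precision_sum(field, threshold=0.9):
    """Hypothetical sketch: full float64 where values are intense,
    simulated float32 accumulation everywhere else."""
    hot = sum(v for v in field if abs(v) >= threshold)   # precision where it matters
    cold = 0.0
    for v in field:
        if abs(v) < threshold:
            cold = to_f32(cold + to_f32(v))              # cheap lane
    return hot + cold

rng = random.Random(0)
field = [rng.random() for _ in range(10_000)]            # toy "mesh" values

approx = mixed_precision_sum(field)
exact = math.fsum(field)
rel_err = abs(approx - exact) / exact
```

On real hardware the payoff is bandwidth and FLOP rate, not this software rounding; the sketch only shows that the reduced-precision region barely moves the answer.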
Addison:
A final one I’ll talk about with prediction: we see a lot going on in storage with tiering. It really matters where the data is right now. When we talked about hierarchical storage management or information lifecycle management, any type of tiered storage over the last 20 years, you had two tiers: you had disk and you had tape. Those were your tiers. But now we’ve got NVMe and SSDs on the node, then burst buffers mixed in with the disk. I’ve got a warm archive, I’ve still got a cold archive, and I’ve got data in the cloud. My data is in five, six, seven tiers.
Addison:
This decade, high performance storage, especially for commercial markets, is not going to be about having a high-bandwidth parallel file system to one fat tier. We’re moving beyond that. It’s about having the data in the right tier at the right time. This is another good potential use for AI. If I can monitor usage patterns, promote data before you ask for it, and get toward predictive storage (hyperscale companies work on this already, and national labs have projects here), I think that’s going to be a big deal.
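[Editor’s illustration] A toy sketch of that predictive-tiering idea. All names are hypothetical, and a bare access counter stands in for whatever usage-pattern model a real system would train; the shape of the mechanism is just “watch accesses, promote before the next request stalls on the cold tier.”

```python
from collections import Counter

class PredictiveTiers:
    """Hypothetical two-tier model: promote a dataset out of the cold
    tier once its access count suggests it is about to be hot."""

    def __init__(self, promote_after=3):
        self.hot = set()            # e.g. node-local NVMe
        self.cold = set()           # e.g. disk or tape archive
        self.accesses = Counter()   # crude stand-in for a usage model
        self.promote_after = promote_after

    def ingest(self, name):
        """New data lands in the cold tier."""
        self.cold.add(name)

    def access(self, name):
        """Record an access; promote when the 'prediction' fires."""
        self.accesses[name] += 1
        if name in self.cold and self.accesses[name] >= self.promote_after:
            self.cold.discard(name)
            self.hot.add(name)
        return "hot" if name in self.hot else "cold"

tiers = PredictiveTiers()
tiers.ingest("climate_run_42.nc")
tiers.ingest("old_checkpoint.h5")
for _ in range(3):
    tier = tiers.access("climate_run_42.nc")
```

A real predictor would act on more than a count (job schedules, file lineage, time of day), but the promote-before-you-ask loop is the part Addison is pointing at.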
Edward:
It’s fascinating. Earl, what are you most excited about?
Earl:
So the part I’m most excited about: if I look back at the last 10 years at the new products, the social media things, what the clouds have brought to us, and all the science, it’s phenomenal. You look at the curves that Tim was showing, and we’re right now in a phase where everything’s accelerating.
Earl:
So I’m so excited about what we’re going to see in the next five to 10 years in new types of products, in safety, in our society. And that’s going to be heavily driven, in my view, by big compute at a low price, which comes from the clouds and other places, along with AI, machine learning, and massive datasets. Every engineer, every scientist, every analyst is going to be turbocharged. So whatever they accomplished in the last 10 years, I’m expecting they may do 1,000 times that. If you think of our entire engineering and scientific population all at once having these super brains, because they have this access to everything else, that’s what makes me excited.
Edward:
Do you think fusion will stop being constantly 50 years away?
Addison:
Fusion will always be out there.
Edward:
Right.
Earl:
I think we’re going to see a lot of things that we thought were going to take 10, 20, 30 years happen in a fraction of that time. I’m not so certain about fusion yet, but I think a lot of things are going to be accelerated forward. Yes.
Edward:
Awesome. So Earl, Addison, Tim, thank you very much.
Earl:
Sure. Thank you.
Addison:
Thanks, Edward.
Tim:
Thanks everyone.