Supercomputing: Where We’ve Been and Where We’re Headed

Since the first supercomputer debuted in 1964, high performance computing has evolved through a series of incredible breakthroughs, all the way up to Japan's Fugaku supercomputer, a 7.3-million-core behemoth that made its debut as the world's top-ranked system in June 2020.

From a single CPU in 1964 to systems with more than 150,000 CPUs, often paired with GPUs, in 2020, to say the trajectory of supercomputing is bright would be an understatement. And as organizations shift their engineering practices from physical testing toward simulation to reduce overall R&D cost and time, the demand for more specialized hardware and software is growing, along with the need for fast, readily available compute.

To meet this new demand, companies like Rescale, AWS, Microsoft Azure, and Google Cloud are pioneering a modernized supercomputing experience built on the cloud, which brings its own benefits as well as its own challenges. While cloud supercomputing has grown in popularity over the past few years, companies like Rescale now offer intelligent full-stack automation for big compute and R&D collaboration on hybrid cloud, optimizing for cost, time, and cloud service. The days of on-premises computing are not gone, but there is no denying the need and desire for digital transformation in the world of high performance computing, and it will be interesting to see how the future writes the next chapter of this history.

But we don’t need to tell you about where we’ve been. We’d rather show you.

Author

  • Jolie Hales

    Jolie Hales is an award-winning filmmaker and host of the Big Compute Podcast. She is a former Disney Ambassador and on-camera spokesperson for the Walt Disney Company, and can often be found performing as an actor, singer, or emcee on stage or in front of her toddler. She currently works as Head of Communications at Rescale.
