Deep Learning: The Next Big Challenge for High Performance Computing

We are in the midst of what experts are calling the Fourth Industrial Revolution: artificial intelligence has upended the technology world, with more change to come. Among the many subfields of artificial intelligence is deep learning, which uses layered neural networks to learn from vast amounts of data and powers most of today's most capable AI systems.

So why hasn’t deep learning reached its full potential?

For one, deep learning is extremely computationally intensive. Compute power is often limited, which keeps deep learning algorithms from running at full force.

Fortunately, there's a promising solution in high performance computing (HPC), which generally refers to the practice of aggregating computing power to deliver much higher performance than a single machine could. HPC gives enterprises the compute capacity required to train powerful, useful deep learning models on high volumes of data, and as deep learning advances, HPC will continue to play an expanding role.

Beyond traditional on-premises systems, the last decade has seen HPC move into the cloud. By using cloud HPC, scientists and engineers can tap into compute capacity far beyond what is physically available in-house. Cloud HPC usage is growing, with the 2021 State of Cloud HPC Report showing that 73% of HPC managers believe most or all of their HPC workloads will be in the cloud within the next five years.

Today, cloud HPC is being used to analyze very large data sets, run large-scale computational simulations, and support major research projects. Analysts predict we may be at a tipping point that will drive cloud HPC usage even further, and deep learning will play a part.

Because deep learning is becoming ubiquitous in business, with use cases ranging from medical research and commodities trading to robotics and self-driving cars, there has never been a greater need for deep learning and cloud HPC to converge. The data intensity of a deep learning use case maps directly to the number of physical servers and powerful processors required to handle that data.

HPC Simulations for Enterprise AI

When building enterprise implementations of deep learning algorithms, teams need to run machine simulations to test those implementations before committing to building anything physical. This is especially true for simulations that are highly data-intensive.

A good example is deep learning applied to gene sequencing. By taking advantage of HPC, both the time each sequencing simulation takes and the total number of simulations required can be greatly reduced.

This can also benefit organizations like the GPS navigation software company Waze, which can draw on its driving data to build advanced autonomous vehicle simulations. In a future filled with self-driving cars, running many more simulation iterations, much faster, is critical to successful product launches; there is no quicker way to lose the public's initial confidence in self-driving technology than launching a fleet of autonomous vehicles only to see them crash shortly thereafter.

The Synergy of Deep Learning and HPC

It should come as no surprise that deep learning and HPC have strong synergy. Because HPC simulation is so effective for data-intensive workloads, it has driven significant strides in cosmology, physics, astrophysics, materials science, and even genetics. In all of these fields, the deep learning implementations used have had a strong experimental component.

Currently, this synergy doesn't come without its limitations. Distributed training on HPC platforms is typically benchmarked against standard neural network models and datasets, such as ResNet and ImageNet. This can be limiting when trying to understand the actual performance of domain-specific AI hardware architectures.
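To make this concrete, here is a minimal sketch, in PyTorch, of the kind of standardized benchmark described above: timing training steps of a ResNet-50 on synthetic, ImageNet-sized inputs. The batch size and step count are illustrative assumptions rather than values taken from any particular benchmark suite.

```python
import time

import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet50().to(device)        # a standard benchmark model
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Synthetic stand-ins for ImageNet: 224x224 RGB images, 1000 classes.
batch = torch.randn(32, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (32,), device=device)

steps = 10                                               # illustrative run length
start = time.time()
for _ in range(steps):
    optimizer.zero_grad()
    loss = criterion(model(batch), labels)
    loss.backward()
    optimizer.step()
if device == "cuda":
    torch.cuda.synchronize()                             # wait for queued GPU work to finish
print(f"throughput: {steps * 32 / (time.time() - start):.1f} images/sec")
```

Results from a run like this say a lot about how fast a platform trains ResNet-style networks, but relatively little about how it would handle a domain-specific model, which is exactly the limitation noted above.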

With this in mind, the potential of AI with HPC can be maximized through:

  • Deployment of AI models and data on open source platforms. As with most open source implementations, this accelerates the adoption of the necessary tools.
  • Development of arithmetic frameworks that support more informed choices of AI hardware architectures.

Companies like IBM, Lenovo, and HP have worked to bring this synergy of AI and HPC to their suites of enterprise products, allowing their customers to capture and process enormous amounts of data for their enterprise workloads as they move more of their operations to the cloud.

Enabling Deep Learning-Ready Hardware

Deep learning's viability rests on its capacity to process enormous amounts of information. To do this, the right hardware needs to be in place to run trillions of calculations at a rapid pace. Because HPC uses dense clusters of computers to deliver that level of compute, it can successfully enable deep learning-ready hardware.

As a matter of fact, it's difficult to talk about AI hardware without including cloud HPC infrastructure in the discussion. This is why most advanced deep learning research today runs on HPC node clusters.

This relationship is mutually beneficial. Not only are HPC clusters necessary for modern deep learning algorithms, but deep learning is also tied directly to the future of HPC, because some of the most compelling uses of HPC, such as screening for Alzheimer's disease or breast cancer, rely on these algorithms.

To truly understand how HPC hardware can lead to the creation of better deep learning methods, consider the following:

1. HPC utilizes more specialized processors

Because HPC often relies on more powerful GPUs, it enables purpose-built processing for data-intensive deep learning algorithms. These algorithms, particularly convolutional neural networks, typically require GPUs to train in a reasonable amount of time.

GPUs are well suited to training deep learning algorithms and other AI models because, unlike traditional CPUs, they can run thousands of computations in parallel. These computations also demand far more memory bandwidth than traditional CPUs provide, and the high bandwidth available on GPUs makes it possible to move large amounts of data through the processor at once.
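As a rough illustration of that parallelism argument (not a rigorous benchmark), the sketch below times the same large matrix multiplication, the core operation inside neural network training, on the CPU and then on a GPU when one is available. The matrix size and repetition count are arbitrary choices for illustration.

```python
import time

import torch

def time_matmul(device: str, size: int = 4096, reps: int = 10) -> float:
    """Time `reps` multiplications of two size x size matrices on `device`."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    start = time.time()
    for _ in range(reps):
        _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()   # GPU kernels run asynchronously; wait for them
    return time.time() - start

print(f"CPU: {time_matmul('cpu'):.2f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.2f} s")
```

On typical hardware the GPU run finishes many times faster than the CPU run, which is the property deep learning training exploits at scale.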

2. HPC clusters allow for greater volumes

To get the most out of deep learning algorithms, large amounts of storage and memory are required to run longer analyses and hold larger datasets. This matters because deep learning is judged on accuracy, and accuracy improves with more training data, all of which has to be stored and fed to the model.

In 2016, a computer program developed by Google DeepMind won four out of five Go games against 18-time world champion Lee Sedol in what became known as the Google DeepMind Challenge Match. Go is a complex board game considered far harder for computers than chess, and the machine's win has been compared to the 1997 match in which Garry Kasparov lost to IBM's computer, Deep Blue, the moment computers were widely seen to have surpassed humans at chess.

Consider the computing power needed for DeepMind to beat Sedol. The system had to play millions of games against itself and simulate enormous numbers of positions before it could reliably beat any strong human player, let alone the world champion.

3. HPC clusters lead to greater implementation efficiency

The algorithmic efficiency needed for deep learning is made possible by HPC node clusters. High performance computing allows workloads to be distributed evenly across many compute nodes, ensuring that all of the allocated resources are put to their best possible use.
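One common way that even distribution is achieved in practice is data-parallel training, where each process (typically one per GPU or node) works on its own shard of the data and gradients are averaged across the cluster. The following is a minimal sketch using PyTorch's DistributedDataParallel with a placeholder model and synthetic data; it assumes a launcher such as torchrun starts one process per device.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # A launcher such as torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK
    # in the environment before this script starts.
    dist.init_process_group("nccl" if torch.cuda.is_available() else "gloo")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    if torch.cuda.is_available():
        torch.cuda.set_device(local_rank)
        device = f"cuda:{local_rank}"
    else:
        device = "cpu"

    model = DDP(torch.nn.Linear(128, 10).to(device))        # placeholder model
    data = TensorDataset(torch.randn(4096, 128), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(data)                       # even shard per process
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x.to(device)), y.to(device))
        loss.backward()                                      # gradients averaged across processes
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nproc_per_node=4 train_ddp.py` (the script name and process count are placeholders), each process trains on its own slice of the data while the framework keeps the model replicas synchronized.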

Clusters can also make these systems more cost-efficient for enterprises building deep learning algorithms. By using cloud HPC resources, companies can pay as they go and avoid the heavy upfront costs they would otherwise incur.

Implications for Industry

Deep learning running on HPC is so useful for modern enterprises that experts like Andrew Ng have compared AI's impact to that of electricity, part of what many call the Fourth Industrial Revolution. Aside from hosting deep learning, HPC can also host many other analytical workloads that would not be feasible without scalable access to highly intensive computing power.

When it comes to research and development, the possibilities HPC opens up for enterprises can't be overstated. It may be safe to say that every industry that uses deep learning will need to make use of HPC services in some way if it wants to be successful.

AI and HPC together will help industries create applications with capabilities that the business world has never seen. 

For example, consider that HPC systems typically have between 16 and 64 nodes, each with two or more central processing units (CPUs). Most companies today run applications on single machines with only a couple of CPUs. Not only will this wider availability of CPUs enable better AI applications, but cloud HPC's common use of more powerful GPUs and field-programmable gate arrays (FPGAs) will deliver even more compute power.
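As a toy illustration of that contrast, the sketch below spreads an embarrassingly parallel workload across every CPU core on a machine rather than a single process. The work function is just a stand-in for real per-item computation; scaling the same idea across many multi-CPU nodes is what HPC schedulers and frameworks add on top.

```python
import os
from multiprocessing import Pool

def score(n: int) -> int:
    """Placeholder for real per-item work (e.g. feature extraction or scoring)."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    items = list(range(50_000, 50_064))
    with Pool(processes=os.cpu_count()) as pool:   # one worker per available core
        results = pool.map(score, items)
    print(f"processed {len(results)} items on {os.cpu_count()} cores")
```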

Moving to the Cloud

Virtually all of the workloads associated with enterprise deep learning are moving to the cloud, as is a significant majority of HPC applications overall. This is especially true for large-scale deep learning projects, for which cloud HPC is far more compelling. There are several reasons these applications are moving to the cloud:

Better Applicability

Because deep learning models require more compute power than traditional models, cloud HPC is the most practical option for scaling these projects up. Moving both HPC workloads and deep learning algorithms to the cloud lets each technology be applied where it delivers the most value.

Availability of Dedicated Tools

The different providers of cloud infrastructures are focusing more and more on services that enable both HPC and deep learning. As more enterprises take a cloud-first approach, there will be more user-friendly interfaces for both technologies, as well as the option to use multi-cloud and hybrid cloud architectures instead of strict on-premises architectures. 

Greater Support

Because of the complexity of both deep learning and HPC projects, moving to the cloud gives organizations ready access to expert support from their providers instead of relying solely on an in-house team to troubleshoot. This kind of support is key to fully understanding how to manage applications and optimize processes while migrating to the cloud.

More High-End Functions

Because cloud computing has become far more competitive in recent years, the providers of cloud services are increasingly offering high-end hardware options. This means that moving HPC and deep learning to the cloud will help organizations leverage the built-in features and integrations that are made available to them, such as performance and error analysis. This is particularly useful for areas like pharmaceutical research, education, and cybersecurity. 

A More Available Technology

Perhaps most importantly, HPC services were until recently only accessible on-premises. That model carries the costs of building, maintaining, and updating systems, and it limits where resources can be allocated geographically. The migration to cloud HPC will help drive the creation of advanced deep learning applications for the simple reason that cloud HPC is readily accessible.

What the Future Holds

So what does the future hold for enterprises once this convergence of deep learning and HPC finally reaches an inflection point? 

1. Real-time intelligent systems

Consider a future where all cars are autonomous: self-driving vehicles access real-time data through cloud HPC and use sensor fusion to make better-informed driving decisions in the moment.

The implications for such a use case are incredible. 

This could mean a future where driving is fully automated, dramatically decreasing the number of accidents and fatalities on the roads.  The business potential for such an implementation is also massive, which is why companies like Waymo and Tesla are fully on board with turning this possibility into a reality.

2. Faster and more effective insights

Deep learning will continue to make it possible for companies to obtain more intelligent insights, and HPC will allow near-instant access to those insights.

Imagine how this technology can be used in an intelligent supply chain system. With deep learning, HPC, and the right IoT tools working in tandem, logistics-heavy companies like Amazon will see reduced constraints, shipping products faster and with greater accuracy.

Imagine a world where same-day shipping is the norm.

The same applies to products shipped by drone. Deep learning provides the logistical accuracy, and HPC makes that information available instantly.

3. Quicker go-to-market strategies

Another clear use for the marriage of these two technologies is the ability for companies to blitzscale new products and get them to market more quickly. Companies like Pfizer and Moderna have already pointed to the usefulness of machine learning in developing their Covid-19 vaccines, which were created faster than any vaccines in history, compressing a development cycle that typically takes around ten years into less than one.

And innovations like this will only accelerate as companies continue to embrace cloud HPC. Medicine will be more effective, buildings will be more energy-efficient, travel will be safer and faster; the possibilities, as cliché as it is to say, really are endless.

Of course, as technology continues to advance, other technologies will undoubtedly arrive to further optimize these processes. Although HPC clusters will greatly enhance the viability of deep learning, they still perform their computations in traditional binary. Now imagine deep learning and HPC paired with a third, more radical concept: quantum bits that exist in superpositions of 1 and 0, unlocking even greater compute power.

Alas, that is a thought experiment for another day.  

But it’s coming.


Author

  • Paolo Besabella

    Paolo Besabella is an AI enthusiast and tech writer. He is the director of the London-based AI adaptation business, Montesclaros LP, and considers himself a disciple of the godfather of deep learning, Geoffrey Hinton. His interest in AI stems from the belief that it will restore a greater sense of humanity to the world.
