Artificial intelligence tools that allow us to understand and predict business behavior based on data analysis, Internet of Things devices that send large volumes of information, virtual reality and augmented reality applications that are gaining ground, 3D image processing for different industries, analytics tools that process millions of data points in real time…
Data is today a source of value that allows a company to gain competitive advantages or create a unique experience for its customers, and it can help a scientific team make a discovery that improves the lives of millions of people.
As business applications become more demanding in terms of processing capacity, the concept of high-performance computing (HPC) gains relevance: solutions capable of performing quadrillions (thousands of trillions) of calculations per second. Its best-known form is the supercomputer: thousands of computational nodes that combine their power and work simultaneously on the same problem. The consulting firm Hyperion Research, for example, estimates that the global HPC market will reach US$44 billion by the end of 2022.
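To put that figure in perspective, here is a minimal back-of-envelope sketch in Python; the workload size and the sustained rates are illustrative assumptions, not benchmarks of any specific system:

```python
# Hypothetical comparison: how long would a fixed amount of work take
# at roughly desktop speed versus roughly petascale (quadrillions of
# operations per second)? All figures are assumptions for illustration.

WORKLOAD_OPS = 1e18  # a hypothetical job requiring a quintillion operations

rates = {
    "single desktop core (~1e9 ops/s)": 1e9,
    "petascale HPC system (~1e15 ops/s)": 1e15,
}

for name, ops_per_second in rates.items():
    seconds = WORKLOAD_OPS / ops_per_second
    print(f"{name}: about {seconds:,.0f} seconds (~{seconds / 3600:,.1f} hours)")
```

At the assumed rates, the same job drops from decades of desktop time to under twenty minutes on a petascale system, which is the gap that makes the use cases below possible.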
High-performance uses
What are the most common uses of HPC? They are numerous and diverse, although in general they have one requirement in common: the processing must be done in real time.
In the health industry, HPC is applied to drive medical advances and discover new vaccines and treatments. In genomics, it was the key technology behind DNA sequencing.
Some governments have already implemented it to track and forecast natural disasters.
For the aerospace industry, it is the opportunity to run complex simulations and build safer aircraft and spacecraft.
In the financial industry it is used to detect fraud. It is also used to stream sporting or cultural events to massive audiences.
In the oil & gas industry, it helps to precisely locate wells and other assets… the applications have no limits and depend on the creativity of each organization.
The basic infrastructure
In general terms, a typical HPC infrastructure must address three aspects: computational processing, the network and storage. The architecture is usually made up of servers connected in a cluster: a large number of computers working simultaneously to increase performance.
All this work can be done in two ways:
- Inherently parallel workloads, which involve breaking the problem into small, independent tasks and distributing them across servers to run at the same time, usually without communicating with each other, and then gathering the results (see the first sketch after this list).
- Tightly coupled workloads, which consist of taking a large task and dividing it across nodes that must communicate with each other to complete it (see the second sketch after this list).
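A minimal sketch of the first pattern, using Python's standard multiprocessing module on a single machine; the local worker processes stand in for cluster nodes and the task itself (summing squares over a range) is just a placeholder for real work:

```python
# Inherently parallel workload: independent tasks, no communication
# between them, results gathered at the end.
from multiprocessing import Pool

def independent_task(chunk_start: int) -> int:
    # A self-contained piece of work that needs no data from other tasks.
    return sum(i * i for i in range(chunk_start, chunk_start + 100_000))

if __name__ == "__main__":
    chunks = [i * 100_000 for i in range(16)]  # 16 independent pieces of the problem
    with Pool(processes=4) as pool:            # 4 local workers stand in for nodes
        partial_results = pool.map(independent_task, chunks)
    print("combined result:", sum(partial_results))
```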
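And a sketch of the second pattern. It assumes the third-party mpi4py package and an MPI runtime such as mpirun, which the text does not mention; here every node computes a partial result but must communicate with the others to obtain the global answer:

```python
# Tightly coupled workload: each rank works on its share of the problem
# and a collective communication step (allreduce) combines the results.
# Run with, for example: mpirun -n 4 python coupled.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the job
size = comm.Get_size()   # total number of processes

# Split a hypothetical range of work evenly across the ranks.
N = 1_000_000
chunk = N // size
start = rank * chunk
end = N if rank == size - 1 else start + chunk

partial = sum(i * i for i in range(start, end))

# Communication step: every rank needs the global result to continue.
total = comm.allreduce(partial, op=MPI.SUM)

if rank == 0:
    print("global result:", total)
```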
The nodes, in turn, are networked to the storage. One of the great infrastructure challenges is to balance these components so that they all work at an equivalent rate and no bottlenecks appear.
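A simple way to reason about that balance is a back-of-envelope check like the one below; the node count and the bandwidth figures are invented purely for illustration:

```python
# Hypothetical balance check: if every node reads from shared storage at
# the same time, does the aggregate demand exceed what the storage can deliver?
NODES = 500
PER_NODE_READ_GBPS = 2.0        # assumed sustained read per node (GB/s)
STORAGE_BANDWIDTH_GBPS = 600.0  # assumed aggregate storage throughput (GB/s)

demand = NODES * PER_NODE_READ_GBPS
print(f"aggregate demand: {demand:,.0f} GB/s vs storage: {STORAGE_BANDWIDTH_GBPS:,.0f} GB/s")
if demand > STORAGE_BANDWIDTH_GBPS:
    print("storage is the bottleneck: nodes will sit idle waiting for data")
else:
    print("storage keeps up: compute remains the limiting factor")
```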
The cloud challenge
Historically, HPC implementations have been done on-premise, at the company's physical facilities. This implied a very significant investment (out of reach for many organizations), high management costs, the need for specialized staff to keep the architecture operational, and frequent spending to keep the infrastructure up to date.
The new challenge is to bring HPC to the cloud: providers are always at the forefront of technology, which eliminates the risk of obsolescence, and they often offer approaches that are much more secure. The bigger challenge here is to find the right provider, one that not only delivers the performance it promises but also has proven experience in this field.
HPC under a cloud model: a way to bring computing power to the cloud. Literally and metaphorically.