One of the areas which pops up from time to time in Cloud conversations is High Performance Computing (HPC). The HPC space, perhaps undeservedly, remains a niche market, usually made up of researchers from sectors such as biomedical and aerospace. This area has always interested me because I have a personal affinity for science and discovery, and I feel that research projects often don't get the full funding they need to be successful. Personal interest aside, HPC in the Cloud is just one more area where large multi-tenant farms of commodity hardware could be put to more efficient use for the greater good.
When looking at use cases for HPC in the Cloud, one must examine the individual workloads to be processed and assess what level of true scale is needed to accommodate multiple workloads. The degree of parallelization in the software that will run in the Cloud is highly important, as is the speed at which the Cloud can execute operations. While some software written for traditional on-premises HPC clusters can scale horizontally quite easily, some software was simply never written to scale beyond what was practical on on-site equipment. Thus, the first step is to ensure that future software, if not already written that way, is written to scale horizontally indefinitely. Software that can only scale to a certain 'width' can still be run in the Cloud, but unless several other pieces of software with similar limitations share the same 'tenant space', the workloads will be imbalanced. This is simply a matter of categorizing the horizontal scalability of an app and aligning resources, as much as possible, with similar types of applications.
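The categorization step above can be sketched in a few lines of Python. This is purely illustrative: the workload names, the `max_width` attribute, and the bucketing thresholds are invented for the example, not drawn from any real scheduler.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Workload:
    name: str
    max_width: int  # widest horizontal scale the software supports (hypothetical attribute)

def scalability_class(w: Workload) -> str:
    """Bucket a workload by how far it can scale out (thresholds are arbitrary)."""
    if w.max_width >= 1024:
        return "elastic"   # effectively unbounded horizontal scaling
    if w.max_width >= 64:
        return "wide"
    return "narrow"        # limited-width software, best co-tenanted with its peers

def group_by_class(workloads):
    """Group workloads so similarly-limited apps can share the same tenant space."""
    groups = defaultdict(list)
    for w in workloads:
        groups[scalability_class(w)].append(w)
    return dict(groups)

# Hypothetical research workloads:
jobs = [
    Workload("genome-align", 4096),
    Workload("cfd-solver", 128),
    Workload("legacy-mpi", 16),
    Workload("monte-carlo", 8192),
]
```

Grouping the two effectively-unbounded jobs together, and keeping the width-limited `legacy-mpi` job with other narrow workloads, is what keeps the underlying resource pools evenly loaded.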
Once we have sorted out how to balance the workloads across the multi-tenant platform, we can start to process them in batches that make 'resource sense.' It goes without saying that letting workloads 'trespass' onto resources that are not geared for their best performance will not be the most efficient approach, but it does offer some flexibility when one type of application workload terminates early and the underlying resources are free to be re-assigned.
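The re-assignment idea can be sketched as a toy resource pool. Everything here is hypothetical, the tenant names, node counts, and the pool API are invented to illustrate freed capacity flowing to a still-running batch.

```python
class ResourcePool:
    """Toy model of a shared node pool across multi-tenant batches."""

    def __init__(self, total_nodes: int):
        self.free = total_nodes
        self.allocations = {}  # tenant -> nodes currently held

    def allocate(self, tenant: str, nodes: int) -> int:
        """Grant up to `nodes` from the free pool; return what was actually granted."""
        granted = min(nodes, self.free)
        self.free -= granted
        self.allocations[tenant] = self.allocations.get(tenant, 0) + granted
        return granted

    def release(self, tenant: str) -> None:
        """A batch finished: return all of its nodes to the free pool."""
        self.free += self.allocations.pop(tenant, 0)

pool = ResourcePool(100)
pool.allocate("bio-batch", 60)
pool.allocate("aero-batch", 40)
pool.release("bio-batch")               # bio-batch terminates early
extra = pool.allocate("aero-batch", 60) # freed nodes are re-assigned
```

Even when the freed nodes are not perfectly tuned for the remaining workload, putting them back to work beats leaving them idle, which is the flexibility the paragraph above is pointing at.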
Now, with all this said, it certainly helps when a vendor steps up to make the transition from on-premises equipment to HPC in the Cloud easy for researchers and scientists. One such vendor, Logicalis, has done just that for researchers in the UK. The truly remarkable part is that Logicalis allows many entities to pool funding and purchase high-performance computing power together. Think of it as a Public Cloud, similar to Amazon's, but whose principal clients are researchers in need of HPC infrastructure. It really is a brilliant move, and I hope it proves mutually beneficial for Logicalis and the researchers involved. Having a relationship like this work in the field of research and HPC just lends more credibility to Cloud Computing overall.