
Cloud based HPC for quant development… Let’s go ahead and throw it out for discussion!

(Last Updated On: November 30, 2011)


 

==

We see cloud-based HPC as essential for leveraging innovation, particularly for small to medium-sized, technology-driven companies. Our own strategy is to provide a variety of enterprises with high-quality 3D seismic software via the desktop and HPC servers. This will let companies large and small manage the processes and workflows to QA seismic data and prepare velocity and 3D seismic cubes, then send the very large jobs to the Microsoft Cloud for HPC on a 'buy by the drink' basis.

Obviously, companies that previously had to buy 3D products from the big boys with massive in-house compute capabilities will now have an option to perform the same computations without the huge CAPEX of building data center capacity, and they can initiate the computations from the desktop/HPC server to the cloud from anywhere there is a high-speed internet connection.

Even a large enterprise like Exxon, with incredible resources, will have options regarding future data center investments when large-scale compute and storage requirements are what drive capacity beyond the day-to-day operations of the business.

Microsoft, Google and Amazon are definitely opening up opportunities to rethink not only HPC, but also Co-Lo and backup strategies for IT.

 

==

I sometimes wonder whether cloud-based HPC doesn't have a greater long-term value. Cloud-based HPC is, in some ways, the latest instance of timeshare computing, and past instances have failed to survive the business cycle. The volume of data that can be generated and subsequently needs to be transmitted across a WAN could be prohibitive as well.
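
For a rough sense of why that WAN concern matters, here is a back-of-envelope sketch; the data volume and link speed are illustrative assumptions, not figures from the discussion:

```python
# Back-of-envelope estimate of WAN transfer time for HPC output.
# The data volume and link speed below are illustrative assumptions,
# not numbers taken from the discussion above.

def transfer_hours(data_gb: float, link_mbps: float) -> float:
    """Hours needed to move data_gb gigabytes over a link_mbps megabit/s link."""
    bits = data_gb * 8e9                  # gigabytes -> bits
    seconds = bits / (link_mbps * 1e6)    # bits / (bits per second)
    return seconds / 3600.0

# Example: a 2 TB result set over a sustained 100 Mbit/s connection
print(f"{transfer_hours(2000, 100):.0f} hours")   # roughly 44 hours
```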

It is probably not an issue of survival any more, as Amazon, Google, et al. have broken through the sustainability barriers. However, private clouds of administrative applications running on in-house HPC capacity seem to make good sense for large-scale HPC users.

 

==

I would definitely consider Microsoft Azure in the cloud and Windows Server 2008 R2 SP1 with HPC Pack SP2 on premises. With some HPC app configuration on your virtual machine master before you upload it to an Azure VM, and some .NET programming to create an Azure Storage payload upload app, you can burst your HPC job to as many nodes as you can afford, as long as you have the upload bandwidth for your payload. http://bit.ly/ma4Qvs
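
As a rough illustration of the payload-upload piece: the comment describes a .NET uploader against the 2011-era Azure SDK, but the sketch below uses today's azure-storage-blob Python package instead, and the connection string, container name and file name are placeholders, not details from the original setup.

```python
# Minimal sketch of staging an HPC input payload in Azure Blob Storage
# before bursting a job to cloud nodes. Uses the modern azure-storage-blob
# package rather than the 2011-era .NET StorageClient the comment refers to.
# Connection string, container name and file name are placeholders.
import os

from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # account credentials
service = BlobServiceClient.from_connection_string(conn_str)

container = service.get_container_client("hpc-payloads")
try:
    container.create_container()      # ignore the error if it already exists
except ResourceExistsError:
    pass

# Stream the job payload up to blob storage so the burst-out nodes can pull it.
with open("seismic_job_input.dat", "rb") as payload:
    container.upload_blob(name="seismic_job_input.dat", data=payload, overwrite=True)
```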

 

==

You have pretty well nailed our architecture for all the reasons you cited.

I have been engaged in 'timesharing', facilities management, etc., since my days at GE ISBD in the early '70s. The whole notion of options around partially loaded in-house capacity, or lack of access to excess capacity due to unforeseen or unknown spikes, has been the driver from timesharing, to powerful and cheap minicomputers, to facilities management, to a variety of online business services, as you note.

To me the romance of 'cloud' computing is twofold: one is access to and from large-scale IT center assets via VPN or the Internet with a 'buy by the drink' pricing model; the other is real co-location access for backup and disaster recovery options. What Google, Amazon and Microsoft are doing is shaping their cloud/IT infrastructure to provide a specific competitive edge for their particular business models, and many huge enterprises are building analogues in the form of in-house clouds.

At the end of the day our business model will not be sustainable in competition with in-house clouds.

 

 
