Do you consider GPGPU a disruptive technology in the Scientific Computing arena?
That’s why the Teslas cost an extra $2k: it’s not for the RAM, it’s because they don’t have a video output.
Scientific computing customer base explosion!!! Great milestone.
Also, with IEEE-compliant double-precision floating-point support… this is impressive.
GPUs are not a disruptive technology. When applied to the correct field and problem, they can provide improvements and solutions that a CPU cannot. A good example is the visualisation cluster at TACC.
I do not see the GPU as a good approach to scientific computation. Before 2000 I worked with the Intel i860, which I consider a much better approach.
The i860 processor, modestly described by Intel as “a Cray on a Chip”, offered high levels of integer, floating-point, and 3D graphics performance. It was used in UNIX workstations and the Intel iPSC/860 supercomputer. It has since been discontinued.
I hope in the near future to see pure quantum devices for high-end computation. In addition, utilizing multiple DSP chips and one or two RISC chips is a very good approach.
GPUs are fine, from my perspective, as a simple, inexpensive approach for initial general computation, but you have to deal with their limitations.
If you search for CUDA on Google Scholar, you get 23,700 papers; most of these are peer-reviewed scientific papers that use GPUs.
The i860 had plenty of limitations too. The i960 was a better design. Ultimately, though, when the Pentium came out it was faster, so the i860 died.
To be positively disruptive, a new technology needs to be 10X better on one or more parameters, in particular performance, cost, and power; in fact, cost is the single parameter that should be the measure of disruption, where cost naturally means the TCO of the solution.
Disruption can also be negative, for example in the way we program; disruption there is not welcome unless it makes coding and optimisation significantly simpler than our current methods, and this is not the case today.
The benefits should be valid for a wide variety of business cases, and preferably for the dominant workloads in HPC.
When we compare against a well-tuned x86 code, it is very rare to see 10X in performance.
On the economic side, if you add one GPU to a node, then to be 10X better overall you probably need to achieve 50X on the performance side, as the sketch below illustrates.
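To make the arithmetic behind that 50X figure concrete, here is a minimal sketch in Python; the node-cost numbers are my own illustrative assumptions, not figures quoted above. The idea: if a GPU roughly multiplies a node's TCO by five, then a 10X gain in performance per TCO requires about a 50X raw speedup.

```python
# Minimal sketch of the perf-per-TCO argument above.
# The node costs below are illustrative assumptions, not measured figures.

def perf_per_tco_gain(speedup, cpu_node_tco, gpu_extra_tco):
    """Ratio of (performance / TCO) for a GPU node vs. a CPU-only node."""
    gpu_node_tco = cpu_node_tco + gpu_extra_tco
    return speedup * cpu_node_tco / gpu_node_tco

cpu_only = 1.0    # CPU-only node TCO, in arbitrary units
gpu_extra = 4.0   # assumed extra TCO from GPU hardware, power, porting effort

for speedup in (2, 4, 10, 50):
    gain = perf_per_tco_gain(speedup, cpu_only, gpu_extra)
    print(f"{speedup:>3}X raw speedup -> {gain:.1f}X performance per TCO")

# Output: 2X -> 0.4X, 4X -> 0.8X, 10X -> 2.0X, 50X -> 10.0X.
# Under these assumed costs, only a ~50X raw speedup clears the 10X bar.
```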
So the GPU is NOT a disruptive technology; it is an opportunistic solution for achieving 2X to 4X gains in certain niches.
To make the GPU a disruptive technology, the cost needs to be radically lower or the memory bandwidth radically increased.
Meanwhile, the GPU can be an interesting option for a certain part of a workload, but very soon we will have many more options, like MIC or low-power CPUs, and they have a better chance of being disruptive for a wider part of the HPC market.