The EU last week announced plans to double its investment in supercomputers. These high-performance machines, each the size of a large house, cost around £100mn to build, are more powerful than 130,000 laptops combined and incur maintenance costs in the region of £18mn per year.
Expensive as supercomputers are, EU Commissioner for the Digital Economy Neelie Kroes believes they are indispensable, especially in these cash-strapped times. "High Performance Computing (HPC) is a crucial enabler for European industry and for more jobs in Europe," she said last week. "It's investments like HPC that deliver innovations improving daily life."
Supercomputers are often used by governments for forensics, by health services and within educational institutions. The University of Edinburgh's Advanced Computing Facility (ACF) possesses not one but two supercomputers, HECToR and BlueGene/Q, both of which were upgraded this month, marking the next chapter in the UK's supercomputing programme. The machines run complex simulations across a range of scientific disciplines and are funded by four of the UK research councils: EPSRC, STFC, NERC and BBSRC.
According to Professor Duncan Wingham, NERC's Chief Executive, HECToR's increased capacity will help UK researchers in forecasting the impact of climate change and fluctuations in ocean currents, projecting the spread of epidemics, designing new materials, studying the structure and evolution of the universe and developing new medicinal drugs, among other uses. He said:
"HECToR is central to the delivery of NERC's high priority science, particularly in climate, oceanography and the dynamics of the deep interior of the Earth. Access to this new phase of HECToR is essential to maintain the UK's global position in cutting-edge environmental science, and to address key challenges — for example, in contributions to the next Intergovernmental Panel on Climate Change report."
HECToR (High-End Computing Terascale Resources) is the UK's largest, fastest and most powerful supercomputer, capable of over 800 million million calculations a second. HECToR phase 3 uses the latest 'Bulldozer' multicore processor architecture from AMD, which theoretically offers twice the performance of the architecture used in phase 2b. It is hoped that exploiting these new architectures will place the UK at the forefront of scientific software development.
Meanwhile, BlueGene/Q is the most energy-efficient supercomputer ever built: it can perform the calculations of 100 laptops using the same amount of electricity as a light bulb, and has topped the Green500 ranking since November 2010. The University of Edinburgh's BlueGene/Q is the result of a unique knowledge-transfer and industrial partnership with IBM. It is part of the Science & Technology Facilities Council's DiRAC facility, which provides specialised advanced HPC capability for some of the world's most complicated scientific problems in astronomy and particle physics.
The newly updated machine will allow UK particle physicists to provide the precise theoretical input needed in their search for new physics at high-energy particle experiments such as the Large Hadron Collider. The work focuses on solving the theory of the strong nuclear force to understand the properties of the bound states of quarks and gluons that form familiar particles such as the proton and neutron in the atomic nucleus.
Providing early access to the machine gives the UK the edge in exploiting this new technology for science. This year BlueGene/Q will be upgraded to a 1.26 petaflops combined system (1 petaflop is 1,000 teraflops), making it one of the fastest computers in Europe and giving the UK a world-leading simulation capability which, it is hoped, will match that of its US and Japanese competitors.
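For readers keeping score, the unit arithmetic behind these comparisons is straightforward. A quick sketch in Python, using only the figures quoted in this article (1 teraflops is a million million calculations per second, so HECToR's "800 million million calculations a second" is 800 teraflops):

```python
# Floating-point performance units.
TERAFLOPS = 10**12          # 1 teraflops = a million million calculations/sec
PETAFLOPS = 1000 * TERAFLOPS

# HECToR: over 800 million million calculations a second = 800 teraflops.
hector = 800 * TERAFLOPS

# The upgraded BlueGene/Q combined system: 1.26 petaflops.
bluegene_q = 1.26 * PETAFLOPS

# Japan's K Computer, top of the Top500 list: 10.51 petaflops.
k_computer = 10.51 * PETAFLOPS

print(bluegene_q / hector)      # how many times faster than HECToR phase 3
print(k_computer / bluegene_q)  # the gap to the current Top500 leader
```

Even after the upgrade, the numbers make plain why BlueGene/Q sits "somewhat in the shade" of the K Computer: the Japanese machine remains more than eight times faster.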
But is it enough? No one ever formally declared a supercomputing race, but every November and June an independent organisation re-evaluates the 500 most powerful known machines in the world and ranks them at Top500.org. In recent years China and Japan have hogged the top five spots in a field where America was once the dominant player. Japan's K Computer, built by Fujitsu, currently tops the list at 10.51 petaflops, which puts the UK's BlueGene/Q somewhat in the shade. China's Tianhe-1A comes in second, and a Cray computer at Oak Ridge National Laboratory in Tennessee is number three. But this status quo is unlikely to last: three of America's national labs are being upgraded this year, and one of their supercomputers, Mira, is expected to break into the top three at the end of the year. China, meanwhile, is pressing ahead with plans to build 17 supercomputing centres with machines of a petaflop or more.
The early supercomputers were the preserve of governments, academic research institutes and the military, but two developments have brought HPC within reach of businesses such as the car and aviation industries. The first is fairly obvious: Moore's Law, together with the falling price of computer parts. The second is less so: the growth of computer games.
Gaming as a pastime has fuelled an entire industry of components and technology designed for play, but these are also being put to use in areas as diverse as aircraft design and financial markets. IBM has developed a 3D emulator for aircraft design and maintenance using PlayStation 3 games consoles; indeed, the PlayStation itself shares many technical characteristics with a supercomputer. Software designers, meanwhile, have harnessed the power of graphics cards from manufacturers such as AMD and Nvidia, turning chips designed for 3D games graphics into powerful engines for solving complex equations. This increase in power allows researchers to tackle problems previously seen as out of reach, such as mapping parts of the human brain or simulating the creation of a black hole. In business, though, it is bringing a different set of benefits.
Today's supercomputers are cheap enough for large businesses to buy and small enough to fit under a desk, but above all they are quick. One French investment bank that upgraded its technology moved from a share-price prediction model it could run once a week to one it could run every day. More frequent pricing predictions allow banks to fine-tune their strategies and develop a much better understanding of trading risks. And, according to CNBC Business, these HPC systems are giving rise to a new type of bank, known as a 'flow monster': one that makes its profits on commodities and derivatives by trading on very small market movements and margins, but in huge volumes. This is only possible with the automation that HPC provides.
Such innovations are not, though, without their own problems. The so-called ‘flash crash’ of 6 May 2010, which caused shares to collapse on US markets, happened because computers misunderstood a single large trade. According to the Financial Times, an investigation found that one investor’s quick action to hedge a position by selling futures prompted other banks’ automated systems to start selling, fearing a market downturn. Other systems then joined in until authorities, almost literally, pulled the plug. Regulators have since put in place new rules to stop wild swings being caused by computer trading. But the incident illustrates the danger of allowing computers to take charge.
A specialist can spot unexpected outcomes or outlying data through inference and intuition; a computer has to keep crunching the numbers. And since computers cannot, for now, learn in the way that humans learn, they are only as good as the data that humans programme into them. It will be some time yet before we see autonomous aircraft flown by supercomputers.
However, there are some industries where supercomputing is proving super successful. Danish wind turbine-maker Vestas uses an IBM-designed Firestorm supercomputer to design its turbine blades, monitor power output and forecast weather patterns for wind farms. The system is currently the third-largest commercially owned computer in the world. Its database holds global meteorological information going back 11 years, covering 160 separate parameters including wind direction, atmospheric pressure, cloud cover and rainfall. This data helps buyers of wind turbines to understand not just how one machine works, but how they function together as a wind farm, so that buyers can balance the running costs of a turbine against the electricity it can produce. According to Lars Christian Christensen, Vestas' vice president for plant siting and forecasting, the computer needs to run 24/7 to do its job: "Even the large research institutes cannot deliver the results we demand," he says. "It is part of our competitive advantage to have this equipment in-house."
Contrast our own beleaguered Met Office. This week Parliament's science and technology committee concluded that more powerful computers are needed at the UK's weather forecasting centre. Extreme weather warnings, more accurate long-term forecasts, improved climate modelling and the associated public benefits are all, according to the committee, being held back by insufficient supercomputing capacity.
The Met Office currently seeks to make up for its lack of supercomputing prowess by drawing on the resources of the international meteorological community, but this collaboration is of limited use.
The Met Office told the committee that delivering improvements to its forecasting "would require a supercomputer with at least twice the capacity of the near one petaflop facility now being implemented". The cost of this, including associated infrastructure, depreciation, power, service and maintenance charges, and staff costs for developing modelling infrastructure, would be £14m each year over three years.
This is steep, but a new Met Office supercomputer was also estimated to be capable of delivering as much as a ten-to-one return on investment. What's more, we needn't be caught out by unexpected snow showers quite so often.