Wednesday, March 24, 2010

CSIRO Super Computer Replacement

The CSIRO has issued a Request for Tender for an "ASC High Performance Computing Cluster". This is to replace an IBM e1350 cluster system (nicknamed "burnet") in Melbourne. There is a 1.30 MB Zip file of tender documents, including a two page statement of requirements, a three page software technical specification (essentially stating a requirement for Linux based software) and a 77 page draft hardware contract.

I wasn't able to find any details as to where the new computer is to be located, nor any energy performance requirements. Obviously the computer should not be placed at the location of the existing machine, which is taking up prime city office space in Melbourne. Such systems should be in purpose-built facilities on a low cost industrial estate, where they can be equipped with efficient power and cooling systems. There is no need for the computer to be located at an inner city office, as it will be operated remotely over a network. The obvious place to put the computer would be in one of the federal government's new data centres.
2 Requirements – CSIRO ASC Compute Cluster
2.1 ASC Cluster Computer Overview
CSIRO Advanced Scientific Computing (ASC) wishes to renew its existing IBM e1350 cluster system (burnet) ...

The renewed system is targeted to provide services in addition to those available to CSIRO through its partnerships with the Bureau of Meteorology, the National Computational Infrastructure (NCI) and iVEC, and in addition to those available internally within CSIRO, such as the GPU cluster.

The system is targeted to provide:
  • nodes with more memory than other available systems
  • services for commercial-in-confidence computing that must be done on CSIRO hosts
  • services that require access to specialised software that cannot be provided on the partnership systems
  • services that require a more flexible environment than can be easily provided on the partnership systems
  • close integration with the CSIRO ASC Data Store ...
  • specialised cluster services for CSIRO Mathematics, Informatics & Statistics (CMIS)
  • a development platform for CSIRO Marine & Atmospheric Research ...
  • a global file system across the cluster ...
From: Statement of Requirements, ASC High Performance Computing Cluster, CSIRO, ATM ID CSIRORFT2010004, 22-Mar-2010


Monday, December 14, 2009

Supercomputer from game console components

Greetings from the famous room N101 at ANU, where Wayne Luk from Imperial College London is talking on "A Heterogeneous Cluster with FPGAs and GPUs". He started by apologising that the talk would not be polished, as the work is very new and they are just starting to get results. He then gave us a quick tourist's guide to Imperial, which is near Kensington Palace and the Albert Hall. He argues that techniques for embedded systems could be applied to high performance computing. This is counter-intuitive, as embedded computing is usually used for low cost, small scale computing in consumer goods, whereas supercomputers have been made from high cost, high performance custom components.

The concept is that an application written in a conventional programming language would be compiled partly into code for a conventional processor and partly into configuration information for customisable chips. This could be used for applications ranging from supercomputers to distributed systems using "smart dust".

The applications would use Field Programmable Gate Arrays (FPGAs). These are now used in consumer equipment, such as LCD TVs. FPGAs are very efficient in terms of cost and of processing power per unit of energy used, but programming them is complex. FPGAs have high speed serial interfaces which allow them to be used together; examples are the Stratix III and Stratix IV. Imperial have produced an 8 x 8 "cube" of FPGAs ("MUMAlink" interconnect fabric) for emulating processors, and for prototyping the entertainment system in a car.

Graphics processing units (GPUs) have multiple processors, a shared bus and memory on a chip. As a result they are less customisable and less power efficient than FPGAs, but they are easier to program. Ideally FPGAs and GPUs would be combined with conventional processors in the one system for maximum flexibility. This approach differs from the one investigated in "Comparison of GPU and FPGA hardware for HWIL scene generation and image processing" (by Eales and Swierkowski, DSTO Weapons Systems Division, 2009).


Imperial has a 16 node cluster, "Axel", with an AMD CPU, a C1060 GPU and a Vpf5 FPGA in each node, connected by Gigabit Ethernet plus Infiniband (on the FPGA). This gives a "non-uniform node" architecture: a CPU, a GPU and an FPGA in each node, with the nodes connected on a common backbone. Initially a Single Program Multiple Data design was used, for simplicity.

Linux runs on each node, using NFS. There is a custom resource manager and public domain cluster management software (OpenMP and OpenMPI). There is a communications bottleneck, with data having to pass through the CPU on its way from the FPGA to the GPU. Direct communication would be desirable, but difficult.
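The cost of that bottleneck can be sketched in a few lines of Python. The `Device` class below is purely hypothetical, a stand-in for a real accelerator API: a logical FPGA-to-GPU move costs two transfers across the node, because the data must be staged through host (CPU) memory.

```python
# Sketch of the CPU-staged transfer described above. "Device" is a
# hypothetical stand-in for a real accelerator API; each read/write
# models one copy across the node's backbone.

class Device:
    def __init__(self, name):
        self.name = name
        self.copies = 0  # transfers this device has taken part in

    def read(self, buf):
        """Copy a buffer from the device into host memory."""
        self.copies += 1
        return list(buf)  # simulate a copy

    def write(self, buf):
        """Copy a host buffer onto the device."""
        self.copies += 1
        return list(buf)

fpga = Device("fpga")
gpu = Device("gpu")

result_on_fpga = [1.0, 2.0, 3.0]
host_buf = fpga.read(result_on_fpga)  # FPGA -> CPU memory
gpu_buf = gpu.write(host_buf)         # CPU memory -> GPU

# Two transfers for one logical FPGA -> GPU move; a direct
# device-to-device path would need only one.
print(fpga.copies + gpu.copies)  # 2
```

A direct FPGA-to-GPU path would halve the transfer count, which is why the speakers consider it desirable despite the engineering difficulty.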

The question then is what common patterns of parallelism the system should support. The "Berkeley Dwarfs" offer a set of such patterns.

The new Intel Atom chip (codenamed "Pineview") due in early 2010, is rumoured to have an integrated graphics core, which could be useful for low cost systems.

Iridium is planning a new generation of communication satellites with provision for an earth observation payload. It might be interesting to see how much processing could be usefully put on board. The processing might be reprogrammable for communications or observation as required, depending on where the satellite is in its orbit. The Iridium satellites can only carry out their primary function of communications during a small part of their orbit; the rest of the time a satellite could carry out observations and process the data.


Sunday, December 13, 2009

Building a supercomputer from game console components

Wayne Luk from Imperial College London will talk on "A Heterogeneous Cluster with FPGAs and GPUs" at the ANU, 14 December 2009. GPUs (Graphics Processing Units) are used to offload complex image processing from the main processor in PCs and games consoles. FPGAs (Field Programmable Gate Arrays) are more flexible devices which can be reconfigured for custom applications. These chips have become popular as a way to design low cost specialised supercomputers, but no one is exactly sure of the best design for such a machine; hence the need for research to find out. Apart from research, these systems have applications in predicting climate change and cracking encryption codes.
                   Seminar Announcement
School of Computer Science, CECS
The Australian National University

Date: Monday, December 14, 2009
Time: 11:00 am to 12:00 noon
Venue: Room R214, Ian Ross Building [31]

Speaker: Wayne Luk

Title: A Heterogeneous Cluster with FPGAs and GPUs

Abstract:

This talk describes a heterogeneous computer cluster called Axel. Axel contains a collection of nodes; each node can include multiple types of accelerators such as FPGAs (Field Programmable Gate Arrays) and GPUs (Graphics Processing Units). A Map-Reduce framework for the Axel cluster is presented which exploits spatial and temporal locality through different types of processing elements and communication channels. The Axel system enables experiments involving FPGAs, GPUs and CPUs running collaboratively for applications in high-performance computing, such as N-body simulation.
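The Map-Reduce pattern the abstract mentions can be illustrated with a toy sketch. This is not the Axel framework itself, just the general shape of the idea: partition the data across workers (which on Axel would be heterogeneous nodes running FPGA or GPU kernels), map each partition locally, then reduce the partial results. Here the "workers" are plain Python functions.

```python
from functools import reduce

def map_reduce(data, mapper, reducer, workers):
    """Toy Map-Reduce: split data across the workers (stand-ins for
    Axel's nodes), let each apply the mapper and combine locally,
    then reduce the partial results into one answer."""
    n = len(workers)
    partitions = [data[i::n] for i in range(n)]  # round-robin split
    partials = [worker(mapper, part)
                for worker, part in zip(workers, partitions)]
    return reduce(reducer, partials)

# Each "worker" applies the mapper and sums its own partition;
# a real Axel node would run this step on its FPGA or GPU.
def cpu_worker(mapper, part):
    return sum(mapper(x) for x in part)

workers = [cpu_worker, cpu_worker, cpu_worker]
total = map_reduce(range(10), lambda x: x * x,
                   lambda a, b: a + b, workers)
print(total)  # 285 = 0^2 + 1^2 + ... + 9^2
```

The spatial and temporal locality the abstract refers to would come from matching each partition to the processing element and communication channel best suited to it, which this sketch deliberately ignores.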

Biography:

Wayne Luk is Professor of Computer Engineering at Imperial College London. He was a Visiting Professor at Stanford University. His research interests include theory and practice of customizing hardware and software for specific application domains, such as multimedia, financial simulation, and biomedical computing. He is a fellow of the IEEE and the BCS.

From: "A Heterogeneous Cluster with FPGAs and GPUs", ANU, 2009


Thursday, February 12, 2009

Australian prototype single digital backend computing system

CSIRO have issued a request for Expressions of Interest for the "Australian SKA Pathfinder (ASKAP) development of prototype single digital backend computing system" . This is to build a specially designed high speed computer for processing data from a new radio telescope. While intended for this one scientific task, the computer design is likely to have application in other areas. As an example, the Australian Defence Department's JORN over-the-horizon radar has similar processing requirements to a radio telescope.
CSIRO Australia Telescope National Facility (ATNF) is the lead agency for the new Australian SKA Pathfinder (ASKAP) radio telescope and associated infrastructure in Western Australia. ASKAP will demonstrate key technologies of the Square Kilometre Array (SKA), develop the world's best site for centimetre- and metre-wave astronomy and deliver the world's best radio astronomy survey instrument. ASKAP will be a world-leading astronomy research facility in its own right.

CSIRO seeks Expressions of Interest from suitably qualified Suppliers to engage in a multi-year collaborative partnership to develop a highly-specialised computing system termed the 'Single Digital Backend' (SDB). The SDB is a strategic development to demonstrate a potentially very effective method of handling digitised data from future large astronomical arrays such as SKA. Details of the scope of the SDB collaboration are provided in the EOI documentation.
Other Instructions

Some explanation of the ASKAP project is available at -

http://www.atnf.csiro.au/projects/askap/

An overview of Australia's approach to the international SKA project is available at -

http://www.ska.gov.au/

A public brief will be conducted at CSIRO ATNF, Marsfield, Sydney, NSW at 3.30pm on Monday 23 February 2009. You must register to attend the brief before the event - please email the EOI contact officer. Please note that attendance at the brief is non-mandatory, but encouraged.

Due to the strategic importance of the SKA Project to Australia, CSIRO has developed an Australian Industry Participation Plan www.atnf.csiro.au/projects/askap/CSIRO_ASKAP_AIPP_Version1_web.pdf which outlines how Australian companies are given full, fair and reasonable opportunity to supply goods and services to the ASKAP project.

The Department of Innovation, Industry, Science and Research is offering interested Australian companies the opportunity to prepare a pre-briefing capability statement to promote their expertise to potential prime Suppliers. These statements will be provided to the primes prior to the briefing to assist them in identifying Australian companies that could contribute to their SDB collaboration on a commercial basis.

Interested Australian companies are requested to contact Mr Grant Wilson, Manager Aerospace and Marine Industries at the Department of Innovation, as soon as possible (email Grant.Wilson@innovation.gov.au).


Thursday, September 18, 2008

Building supercomputers from computer game chips

Eric McCreath is giving a seminar at the ANU on building supercomputers from computer game chips. He will be talking about using the Cell Broadband Engine (from Sony's PlayStation 3 games console) and the NVIDIA 8800 GPU (from PC gaming graphics cards) for scientific applications. The seminar is free and there is no need to book:

DCS SEMINAR SERIES

Using the Cell Broadband Engine and NVIDIA 8800 GPU for Computational Science Applications: A Particle Dynamics Comparison

Eric McCreath (DCS, ANU)


DATE: 2008-09-25
TIME: 16:00:00 - 17:00:00
LOCATION: CSIT Seminar Room, N101


ABSTRACT:
The NVIDIA 8800 Graphics Processing Unit (GPU) and the Cell Broadband Engine employ a vast amount of parallelism to produce low cost high performance systems which dwarf standard desktop processing units in terms of floating point calculations. These systems offer great potential for computational science applications. This presentation compares the programming model, implementation strategies and realised performance achieved on these two systems for implementing a simple particle dynamics simulation code. Both systems were found to give considerable performance improvements over high-end uni-processor machines. The Synergistic Processing Elements (SPE), on the Cell, can not directly access main memory. This complicates initial implementation compared to the NVIDIA GPU, however, fully exploiting the complex architectures of both systems is equally challenging.
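The kind of particle dynamics code the abstract compares can be illustrated with a naive O(n²) pairwise update. The plain-Python sketch below is the serial baseline (unit masses, gravitational attraction, hypothetical parameter values): it is exactly the inner pair loop that the GPU's many threads, or the Cell's SPEs working from their local stores, would accelerate.

```python
import math

def step(positions, velocities, dt=0.01, g=1.0, eps=1e-3):
    """One naive O(n^2) timestep of a gravitational N-body system
    with unit masses. The all-pairs force loop is the part that
    maps onto the GPU's threads or the Cell's SPEs."""
    n = len(positions)
    forces = []
    for i in range(n):
        fx = fy = 0.0
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r2 = dx * dx + dy * dy + eps  # softening avoids divide-by-zero
            inv_r3 = 1.0 / (math.sqrt(r2) * r2)
            fx += g * dx * inv_r3
            fy += g * dy * inv_r3
        forces.append((fx, fy))
    # Update velocities, then positions (semi-implicit Euler).
    new_vel = [(vx + fx * dt, vy + fy * dt)
               for (vx, vy), (fx, fy) in zip(velocities, forces)]
    new_pos = [(x + vx * dt, y + vy * dt)
               for (x, y), (vx, vy) in zip(positions, new_vel)]
    return new_pos, new_vel

# Two particles at rest attract each other and drift together.
pos = [(0.0, 0.0), (1.0, 0.0)]
vel = [(0.0, 0.0), (0.0, 0.0)]
pos, vel = step(pos, vel)
```

On the Cell, the complication the abstract notes is that each SPE would first have to DMA its slice of the `positions` array into local store before running this loop, whereas the GPU threads can read device memory directly.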

BIO:
Eric McCreath completed his Ph.D. degree in 1999 from the University of New South Wales. This was on research involving Inductive Logic Programming (ILP), which is a sub-field of Machine Learning. He joined the Basser Department of Computer Science (now the School of Information Technologies) at Sydney University in 1999 as a lecturer, and then in 2001 he joined the Department of Computer Science at the Australian National University. Dr McCreath currently holds a lecturing position at the ANU and is pursuing research in the Computer Systems research group.
