147,000 Processors Used for Atom-by-Atom Simulation of Nanoscale Transistor

By , June 23, 2009 1:29 am

Using 147,000 processors of the Jaguar system (a Cray XT5) at the Oak Ridge Leadership Computing Facility, “a simulation of electrical current moving through a futuristic electronic transistor has been modeled atom-by-atom in less than 15 minutes by Purdue University researchers.”

“Professor Klimeck and his colleague have demonstrated the unique transformational scientific opportunity that comes from scaling a science application to fully exploit the capabilities of petascale systems like the Cray XT5 at the Oak Ridge Leadership Computing Facility,” Kothe says.

Freely available nanoelectronics software (OMEN) from nanoHUB.org was used to do this simulation.  I am curious about how else this could be applied.  What other nanostructures might we be able to simulate in this way?

For more information, see the source article.

Illumina Offers $48,000 Personal Genome Sequencing–How Will Data be Handled?

By , June 12, 2009 11:03 pm

A depiction of the structure of DNA

Illumina will offer a service to sequence a person’s genome for $48,000 (a doctor’s prescription is required).  Note that this covers only the sequencing, not the actual analysis of the sequence data.  The consumer must choose from a few different providers to do the actual analysis of the genome sequence data.  Currently, a human genome represented the way Illumina proposes (30-fold coverage of your DNA sequence) would require transferring terabytes of data to the company doing the analysis.  Of course, “analysis” has various stages, so depending on where Illumina stops and the other companies take over, this could be a lot less data (e.g., gigabytes).

So this raises at least a couple of possible challenges for Illumina:

  • How will the data be transferred?
  • How will the data be secured?
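
To put rough numbers on the data volumes involved, here is a back-of-the-envelope sketch.  The genome size, bytes-per-base, and variant-count figures below are my own assumptions for illustration, not Illumina’s actual numbers:

```python
# Back-of-the-envelope estimate of data sizes for a 30x human genome.
# All constants are illustrative assumptions, not Illumina's figures.

GENOME_SIZE_BP = 3.2e9        # approximate haploid human genome, in base pairs
COVERAGE = 30                 # 30-fold coverage, as proposed
BYTES_PER_BASE_IMAGE = 20.0   # assumed: raw image/intensity data per sequenced base
BYTES_PER_BASE_CALLS = 2.0    # assumed: base call plus quality score
VARIANTS_PER_GENOME = 4e6     # assumed: variants relative to the reference
BYTES_PER_VARIANT = 50        # assumed: size of one variant record

sequenced_bases = GENOME_SIZE_BP * COVERAGE
print(f"Raw intensity data:   ~{sequenced_bases * BYTES_PER_BASE_IMAGE / 1e12:.1f} TB")
print(f"Base calls + quality: ~{sequenced_bases * BYTES_PER_BASE_CALLS / 1e12:.2f} TB")
print(f"Variant calls only:   ~{VARIANTS_PER_GENOME * BYTES_PER_VARIANT / 1e9:.2f} GB")
```

Depending on which of those layers actually changes hands, the handoff could be terabytes or just gigabytes, which is exactly the ambiguity noted above.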

Transferring the Data

One can see that transferring on the order of terabytes of data would not be a problem if the turnaround time is long enough, although if the service becomes more popular, scaling may be a problem (or at least keeping network capacity in sync with the analysis providers).  Nevertheless, will Illumina establish encrypted network connections with the consumer’s/doctor’s chosen analysis provider?  Will they ship the data encrypted on external hard drives?  If so, how will the multiple drives be tracked?
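
To get a feel for how turnaround time and link speed interact, here is a quick sketch of transfer times at a few line rates.  The dataset size and the 70% effective-utilization figure are assumptions:

```python
# Rough transfer-time estimates for shipping one genome's worth of sequence data.
# The dataset size and effective utilization are assumptions for illustration.

DATASET_TB = 2.0     # assumed data volume per genome, in terabytes
EFFICIENCY = 0.7     # assumed fraction of line rate actually achieved end to end

dataset_bits = DATASET_TB * 1e12 * 8
for name, mbps in [("100 Mb/s", 100), ("1 Gb/s", 1000), ("10 Gb/s", 10000)]:
    hours = dataset_bits / (mbps * 1e6 * EFFICIENCY) / 3600
    print(f"{name:>8}: ~{hours:.1f} hours")
```

At gigabit speeds a single genome is an overnight transfer; at 100 Mb/s it stretches into days, which is where shipping encrypted drives starts to look attractive.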

Securing the Data

I’m assuming the security/encryption questions may have answers based on current electronic health records implementations, although I’m not sure whether electronic patient information systems are typically interconnected between different health care organizations.  That is, aren’t these systems usually secured and confined within the network of a particular health care organization?  And if the data is placed on external hard drives and shipped, would encrypting terabytes of data per patient be challenging?
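
On the mechanics of encryption itself, encrypting terabytes is mostly a throughput and key-management problem rather than a conceptual one.  Here is a minimal sketch of chunked authenticated encryption using Python’s cryptography package; the package choice, chunk size, and file names are assumptions, and a real deployment would also need key management, chunk-ordering protection, and auditing:

```python
# Minimal sketch: encrypt a large file in chunks with AES-256-GCM.
# Uses the third-party "cryptography" package (an assumption about tooling).
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MiB per chunk (arbitrary choice)

def encrypt_file(src_path, dst_path, key):
    aesgcm = AESGCM(key)
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        chunk_index = 0
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            # Unique 96-bit nonce per chunk: 4 random bytes plus a counter.
            nonce = os.urandom(4) + struct.pack(">Q", chunk_index)
            ciphertext = aesgcm.encrypt(nonce, chunk, None)
            # Record the nonce and ciphertext length so each chunk can be decrypted.
            dst.write(nonce + struct.pack(">Q", len(ciphertext)) + ciphertext)
            chunk_index += 1

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)  # in practice, load from a key store
    encrypt_file("genome_reads.bam", "genome_reads.bam.enc", key)  # hypothetical file
```

The harder questions are operational: who holds the keys, how they are exchanged with the analysis provider, and how access is audited.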

Using HPC to Understand Swine Flu

By , June 8, 2009 11:29 pm

Here is another great example of how an HPC site can function as a versatile resource for a wide variety of problem domains.  A priority queue was set up on TACC’s Ranger cluster to provide 2,000 to 3,000 processors for two weeks, allowing a team to assess how the underlying structure of the Swine Flu virus (H1N1A) has mutated, or could mutate, to become drug resistant.  With this data, “they believe it will be possible to intelligently design a drug or vaccine that can’t be resisted.”

This still from a QuickTime movie represents a view of the drug buried in the binding pocket of the A/H1N1 neuraminidase protein. The animation also shows a 3D surface view of a neuraminidase protein and footage from the actual drug binding simulation.

From the article cited below:

Supercomputers routinely assist in emergency weather forecasting, earthquake predictions, and epidemiological research. Now, says Schulten, they are proving their usefulness in biomedical crises.

“It’s a historic moment,” he said. “For the first time these supercomputers are being used for emergency situations that require a close look with a computational tool in order to shape our strategy.”

Find more details at Inside the Swine Flu Virus (found via this HPCwire article).

Scalable Staging of Large Datasets to Many Compute Nodes

By , April 9, 2009 11:46 pm

At The Genome Center at Washington University, we are seeing an ever increasing need to align against various reference sequences.  In many cases, hundreds of nodes at a time need to access the same input file (e.g., the appropriate reference sequence database).  The size of the file varies depending on the organism and the aligner being used but, in aggregate for hundreds of copies, a terabyte or more might be requested at the same instant.  At startup, all of the jobs grab the same input file at once, which puts a significant load on our NFS servers and on the other, unrelated jobs also using them.  In some instances, we have wanted to copy the input dataset permanently to the local disk on the compute nodes, but we cannot do that for all possible inputs.

In the past, I used a tool called rgang (it doesn’t seem to be available for download anymore) to distribute files using a distribution tree (e.g., one node would transfer to five others, each of which would in turn transfer to five more, and so on).  Alternatives were other peer-to-peer distribution methods that could ease the burden on the centralized NFS servers while better leveraging the bandwidth available in the cluster’s network switches.
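
As a rough illustration of the tree idea, here is a minimal sketch that builds a five-way fan-out schedule and shells out to ssh/scp so that each node that already has the file pushes it onward.  The host names, paths, and reliance on passwordless ssh are assumptions; rgang itself handled this far more robustly:

```python
# Minimal sketch of tree-structured file distribution: the source node copies the
# file to FANOUT nodes, each of which copies it to FANOUT more, and so on.
# Host names, paths, and passwordless ssh/scp are assumptions for illustration.
import subprocess

FANOUT = 5

def build_schedule(source, targets):
    """Return (sender, receiver) pairs forming a FANOUT-ary distribution tree."""
    have = [source]            # nodes that already hold the file
    pending = list(targets)    # nodes that still need it
    schedule = []
    sender_idx = 0
    while pending:
        sender = have[sender_idx]
        for _ in range(FANOUT):
            if not pending:
                break
            receiver = pending.pop(0)
            schedule.append((sender, receiver))
            have.append(receiver)   # this node can now send to others
        sender_idx += 1
    return schedule

def distribute(path, schedule):
    for sender, receiver in schedule:
        # A real tool would run the transfers within each tree level in parallel;
        # this sketch issues them one at a time for clarity.
        subprocess.run(["ssh", sender, "scp", path, f"{receiver}:{path}"], check=True)

if __name__ == "__main__":
    nodes = [f"node{i:03d}" for i in range(1, 31)]   # hypothetical compute nodes
    distribute("/local/scratch/refseq.fa", build_schedule("node000", nodes))
```

With a fan-out of five, the number of nodes holding the file grows geometrically, so a few levels of the tree cover hundreds of nodes without touching the NFS servers after the first copy.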

When hearing “peer-to-peer,” many people think of the BitTorrent protocol, so I decided to see whether anyone had applied it to staging large datasets to many compute nodes.  I found that this has been studied in several contexts over the years.  See some of the BitTorrent links I ran across, especially the ones related to data distribution in clusters.  While I had seen BitTorrent used in some versions of ROCKS and SystemImager for OS deployment to cluster nodes, I hadn’t seen it used directly for distributing large datasets to compute nodes.  We’ll continue to look into using BitTorrent to see if we might be able to decrease the I/O wait time associated with many nodes needing the same input file at the same time.

Open Innovation in Cancer Research

By , February 18, 2009 2:43 am

Scientific research often benefits from open innovation.  While there are many examples, I am particularly excited to see what happens in the area of cancer genomics.  The Genome Center at Washington University published the results of sequencing the first cancer genome back in November 2008.  Internally, there was collaboration between departments in the School of Medicine, resulting in innovative analyses and leading to more discoveries.  Since then I’ve read and heard about a number of similar or follow-up projects at various institutions.  As data is shared amongst researchers across the world, new collaborations will be formed.  The innovations resulting from these collaborations will hopefully lead to better treatments for cancer.

A Human Genome Per Day? The Genome Center at Washington University Scales Up on Illumina Sequencers

By , February 6, 2009 2:47 pm

We at The Genome Center at Washington University were happy to get official word that we will be adding 21 more Illumina Genome Analyzers to our portfolio of sequencing technology.  That enables us to sequence enough DNA to be equivalent to an entire human genome per day (at 25x coverage).  There is a lot of excitement about the potential such capacity brings.  The Genome Center’s director had this to say:

“Our intention to substantially scale-up with this technology reflects our commitment to large-scale sequencing projects that aim to uncover the underlying genetic basis of various human diseases. With the rapid decline in the cost of whole-genome sequencing, we believe now is the time to embark on initiatives which were previously not possible,” said Richard K. Wilson, Ph.D., Professor of Genetics and Director of the Genome Center at Washington University. “We are confident that we can further reduce the cost and accelerate the rate of human genome sequencing.”

A scale-up of sequencing capacity brings a scale-up in IT capacity.  We’ll be watching our internal network, disk, and HPC resources and scaling as appropriate.  It is likely that these sequencers alone will generate upwards of 20 TB of data per day, which will need further processing on The Genome Center’s computational resources.  I’m excited about the possibilities that this scale-up will bring!

Sequoia: 20 Petaflops, 1.6 million cores, 1.6 Petabytes RAM, 6 Megawatts

By , February 5, 2009 11:50 pm

IBM has won a contract to build a supercomputer, called Sequoia, for the DOE’s NNSA.  It is expected to be installed and brought online in 2011 and 2012.  It will have 1.6 million cores (from potentially 16-core chips) within 96 racks (in about 3,400 sq. ft.), will have around 1.6 Petabytes of memory, and will achieve about 20 Petaflops.  It will require about 6 million watts of power to operate, which works out to around 3.3 billion operations per second per watt, a very impressive figure.  I wonder if that includes the power needed for the cooling system.  And is that figure for when the processors are at 100% utilization or when the system is idle?

At 1.6 PB of memory for 1.6 million cores, that is a relatively low amount of memory per core (about 1 GB).  If the memory were doubled, for example, the system might require a few more megawatts of power.  This is based on very rough estimates of the power needed per GB of memory in some recent commodity clusters.  Do you have any hard numbers on power per GB of memory today?  Any information on the type of memory that might be used in Sequoia?
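
A quick sanity check on those numbers; the watts-per-GB figure is my own rough assumption from commodity clusters, not a published spec:

```python
# Rough arithmetic on the published Sequoia figures, plus one assumption (W/GB).
PEAK_FLOPS = 20e15      # 20 petaflops
POWER_WATTS = 6e6       # about 6 MW
CORES = 1.6e6           # 1.6 million cores
MEMORY_GB = 1.6e6       # 1.6 PB expressed in GB
WATTS_PER_GB = 1.0      # assumption: roughly 1 W per GB of DRAM

print(f"Efficiency:      {PEAK_FLOPS / POWER_WATTS / 1e9:.1f} billion ops/s per watt")
print(f"Memory per core: {MEMORY_GB / CORES:.1f} GB")
extra_mw = MEMORY_GB * WATTS_PER_GB / 1e6
print(f"Doubling memory: roughly {extra_mw:.1f} MW of additional power at 1 W/GB")
```

At 1 W/GB, another 1.6 PB of memory would add roughly 1.6 MW, which is where the “few more megawatts” guess comes from.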

For more information, see IBM to send blazing fast supercomputer to Energy Dept. and/or U.S. taps IBM for 20 petaflops computer.

Tidal Energy to Provide One Fifth of Blue Data Center Power

By , February 5, 2009 12:44 am

From Blue Data Center Will Be Powered by the Tides (found via @tkunau/@ecogeek):

At first, tidal power will only cover one-fifth of the data center’s needs, but Atlantis hopes that if the first phase is successful, they can expand the tidal array to make up the remaining wattage.

Sun Data Center: 165,000 sq. ft. into 700 sq. ft., Reduces Power Usage by 1 Million kWh per Month

By , January 27, 2009 1:09 am

Sun’s Colorado Consolidation Saves Millions describes how Sun used Liebert’s XD rack cooling, clear vinyl cold aisle curtains, and flywheels to increase the density of its data center while also reducing energy consumption.  They consolidated 165,000 square feet of data center space into 700 square feet while reducing their monthly power usage by one million kilowatt-hours.

When we considered the XD cooling units, there were two options: chilled water or refrigerant.  With chilled water, there was the question of potential water leaks in these rack-attached units.  With refrigerant, there were questions about the increase in the number of condensers, where they would be placed, and how much more maintenance would be needed.  With either option, there is also more need for maintenance inside the server room amongst the servers, storage, switches, etc.  The obvious benefit of the XD units is that they can provide enough cooling for up to 30 kW in a single rack.  If I recall correctly, though, there is a limit to the total number of racks the refrigerant-based version can serve, due to limits on the maximum pressure or capacity of the refrigerant in a single system.

As for the vinyl curtains, the usual objection is to their aesthetics.  Personally, I would like to see them installed to help keep the cold air completely contained in the cold aisle, where it is intended to be.  This is especially true in raised-floor environments with high-velocity airflow, where the cold air might be pushed outside the confines of the cold aisle without such containment.

One question about Sun’s use of flywheels: how large are they?  Flywheels generally supply on the order of ten seconds of power, which is usually enough time for generators to kick on but cuts it very close.  What type of services run out of Sun’s Colorado facility?
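
For context on that ten-second figure, ride-through time is just the usable kinetic energy in the rotor (E = ½Iω² between full speed and the lowest speed that still holds output) divided by the protected load.  A toy calculation, with every number an assumption rather than a spec for Sun’s units:

```python
# Toy estimate of flywheel UPS ride-through time.  All numbers are assumptions.
import math

INERTIA = 75          # kg*m^2, assumed rotor moment of inertia
RPM_FULL = 7700       # assumed full speed
RPM_MIN = 7000        # assumed lowest speed that still holds output
LOAD_KW = 250         # assumed protected IT load

def kinetic_energy_mj(rpm):
    omega = rpm * 2 * math.pi / 60          # convert rpm to rad/s
    return 0.5 * INERTIA * omega ** 2 / 1e6

usable_mj = kinetic_energy_mj(RPM_FULL) - kinetic_energy_mj(RPM_MIN)
seconds = usable_mj * 1e6 / (LOAD_KW * 1e3)
print(f"Usable energy: ~{usable_mj:.1f} MJ -> ~{seconds:.0f} s at {LOAD_KW} kW")
```

With numbers in that ballpark you get on the order of ten to twenty seconds, which is why the generator start time matters so much.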

HPC and the Formation of Jupiter and Saturn

By , January 26, 2009 11:09 pm

First-principles simulations have been used to directly determine the miscibility of helium (gold balls) in dense metallic hydrogen (white balls) under the extreme conditions that are present in the interiors of the Jovian planets. Illustration by Kwei-Yu Chu

Physicists at Lawrence Livermore National Laboratory and the University of Illinois at Urbana-Champaign have done First-Principles Molecular Dynamics (FPMD) simulations on LLNL’s high performance computing systems to “determine the equation of state of the hydrogen-helium system at extremely high temperatures (4,000-10,000 degrees Kelvin), similar to what would be found in the interior of Saturn and Jupiter.”   Read more here.
