
Lincoln Laboratory’s new artificial intelligence supercomputer is the most powerful at a university

November 5, 2019

TX-GAIA is tailor-made for crunching through deep neural network operations.
Read this story at MIT News
The new TX-GAIA (Green AI Accelerator) computing system at the Lincoln Laboratory Supercomputing Center (LLSC) has been ranked as the most powerful artificial intelligence supercomputer at any university in the world. The ranking comes from TOP500, which publishes a list of the top supercomputers in various categories twice a year. The system, which was built by Hewlett Packard Enterprise, combines traditional high-performance computing hardware (nearly 900 Intel processors) with hardware optimized for AI applications (900 Nvidia graphics processing unit, or GPU, accelerators).
“We are thrilled by the opportunity to enable researchers across Lincoln and MIT to achieve incredible scientific and engineering breakthroughs,” says Jeremy Kepner, a Lincoln Laboratory fellow who heads the LLSC. “TX-GAIA will play a large role in supporting AI, physical simulation, and data analysis across all laboratory missions.”
TOP500 rankings are based on the LINPACK benchmark, which measures a system’s floating-point computing power, or how fast a computer solves a dense system of linear equations. TX-GAIA’s TOP500 benchmark performance is 3.9 quadrillion floating-point operations per second, or petaflops (though since the ranking was announced in June 2019, Hewlett Packard Enterprise has updated the system’s benchmark to 4.725 petaflops). The June TOP500 result places the system No. 1 in the Northeast, No. 20 in the United States, and No. 51 in the world for supercomputing power. The system’s peak performance is more than 6 petaflops.
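To give a concrete sense of what the benchmark measures, the short Python sketch below times a dense linear solve and divides the standard LINPACK operation count by the elapsed time. It is only an illustration on a single node-sized problem; official TOP500 submissions use the highly tuned HPL implementation running across the entire machine.

# Minimal sketch of a LINPACK-style measurement (illustrative only; not the
# official HPL code used for TOP500 submissions).
import time
import numpy as np

n = 4000  # problem size; real HPL runs use matrices that fill most of memory
rng = np.random.default_rng(0)
A = rng.random((n, n))
b = rng.random(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)  # solve the dense linear system Ax = b
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # standard LINPACK operation count
print(f"{flops / elapsed / 1e9:.1f} gigaflops on this single problem")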
But more notably, TX-GAIA has a peak performance of 100 AI petaflops, which makes it No. 1 for AI flops at any university in the world. AI flops measure how fast a computer can perform deep neural network (DNN) operations. DNNs are a class of AI algorithms that learn to recognize patterns in huge amounts of data. This ability has given rise to “AI miracles,” as Kepner puts it, in speech recognition and computer vision; the technology is what allows Amazon’s Alexa to understand questions and self-driving cars to recognize objects in their surroundings. The more complex these DNNs grow, the longer it takes for them to process the massive datasets they learn from. TX-GAIA’s Nvidia GPU accelerators are specially designed for performing these DNN operations quickly.
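As a rough illustration of why that matters, the Python sketch below counts the multiply-add operations in one forward pass of a small, hypothetical fully connected network; the layer sizes are made up for the example and do not represent a TX-GAIA workload.

# Illustrative estimate of the work in one forward pass of a small,
# hypothetical fully connected network (layer sizes chosen for the example).
layer_sizes = [784, 4096, 4096, 10]  # input, two hidden layers, output

flops_per_example = 0
for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    flops_per_example += 2 * fan_in * fan_out  # one multiply + one add per weight

batch = 1_000_000  # e.g., a million training images in one pass over the data
total = flops_per_example * batch
print(f"{flops_per_example:,} operations per example; "
      f"{total / 1e12:.1f} trillion operations for {batch:,} examples")

Training repeats such passes many times while also computing gradients, which is exactly the kind of work the GPU accelerators are built to speed up.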
TX-GAIA is housed in a new modular data center, called an EcoPOD, at the LLSC’s green, hydroelectrically powered site in Holyoke, Massachusetts. It joins the ranks of other powerful systems at the LLSC, such as the TX-E1, which supports collaborations with the MIT campus and other institutions, and TX-Green, which is currently ranked 490th on the TOP500 list.
Kepner says that the system’s integration into the LLSC will be completely transparent to users when it comes online this fall. “The only thing users should see is that many of their computations will be dramatically faster,” he says.
Among its AI applications, TX-GAIA will be tapped for training machine learning algorithms, including those that use DNNs. It will more quickly crunch through terabytes of data — for example, hundreds of thousands of images or years’ worth of speech samples — to teach these algorithms to figure out solutions on their own. The system’s compute power will also expedite simulations and data analysis. These capabilities will support projects across the laboratory’s R&D areas, such as improving weather forecasting, accelerating medical data analysis, building autonomous systems, designing synthetic DNA, and developing new materials and devices.
TX-GAIA, which is also ranked as the No. 1 system in the U.S. Department of Defense, will support the recently announced MIT-Air Force AI Accelerator. The partnership will combine the expertise and resources of MIT, including those at the LLSC, and the U.S. Air Force to conduct fundamental research directed at enabling rapid prototyping, scaling, and application of AI algorithms and systems.
Story image: TX-GAIA is housed inside a new EcoPOD, manufactured by Hewlett Packard Enterprise, at the site of the Lincoln Laboratory Supercomputing Center in Holyoke, Massachusetts. Photo: Glen Cooper
 
 
