NET2 and NESE Debut at SC25 Network Research Exhibition with High-Speed Data Transfer

December 4, 2025

UMass NET2 and Harvard NESE teams come together to showcase advanced networking and storage technologies, with NESE achieving sustained transfers of 700 Gb/s, paving the way for future large-scale scientific workflows.

Story provided by Rafael Coelho Lopes de Sa (NET2, UMass), Eduardo Bach (NET2, UMass), and Milan Kupcevic (NESE, Harvard).

The 2025 edition of the annual International Conference for High Performance Computing, Networking, Storage, and Analysis (SC25) wrapped up in St. Louis this week with over 16,500 attendees. As in previous years, the MGHPCC was present at the conference, showcasing almost 80 computationally intensive research projects conducted at the center as well as its new partnership with the Massachusetts AI Hub.

New this year, the MGHPCC participated for the first time in the Network Research Exhibition (NRE) program of SC25. Around 20 NREs were organized this year to showcase achievements in High Performance Computing, Networking, Storage, and Analysis. Led by the NET2 team from the University of Massachusetts Amherst, our NRE demonstrated the advanced data-transfer capabilities of the NESE system at the MGHPCC.

MGHPCC booth during SC25

Two years ago, as part of NESE’s mission to create a data-centric infrastructure via co-design with science projects at the MGHPCC, UMass Senior Research Scientist Eduardo Bach and Harvard Senior Engineer Milan Kupcevic launched a new dCache service optimized for high-throughput computing systems. At the MGHPCC, the system has been essential for the NET2 cluster, which receives large amounts of data from the Large Hadron Collider at CERN for processing.

To transfer data to national and international collaborators at a bandwidth that matches the throughput of the NESE dCache cluster, the system was equipped with advanced capabilities that build on collaborative projects at the MGHPCC. Funded by an NSF CC* project (OAC-2346286, UMass PI Rafael Coelho Lopes de Sa), the dCache system was connected via the regional network NEREN to two 400 Gb/s-capable WAN links running from the MGHPCC to New York City and Boston. At those locations, the system peers with major research networks such as ESnet and Internet2, with commercial internet providers, and with testbeds such as the NSF FABRIC network. The new connections were configured at the MGHPCC by UMass engineer Jessa Westclark, and the integration with NESE was performed by Harvard engineer Nick Amento.

The two new network connections were integrated into ESnet's SENSE orchestrator, which allows the creation of dedicated, high quality-of-service connections between data centers and labs across the world. Through SENSE, the links can be turned into on-demand, software-defined circuits with reserved bandwidth. The ability to create high quality-of-service, experiment-specific network paths will be critical for the next generation of data-intensive science.
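To make the idea of an orchestrated circuit concrete, the sketch below shows, in Python, the kind of information such a request carries: two endpoints, a reserved bandwidth, and a time window. The class and identifiers are hypothetical and are not the SENSE API; they only illustrate what "on-demand circuit with reserved bandwidth" means in practice.

```python
# Illustrative sketch only: the real SENSE orchestrator exposes its own API.
# The class and identifiers below are hypothetical and show the shape of an
# on-demand, bandwidth-reserved circuit request between two endpoints.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class CircuitRequest:
    src_endpoint: str    # e.g. a site identifier at the MGHPCC (hypothetical)
    dst_endpoint: str    # e.g. the SCinet/NRP side on the show floor (hypothetical)
    reserved_gbps: int   # bandwidth guaranteed to the circuit
    start: datetime
    end: datetime

    def describe(self) -> str:
        hours = (self.end - self.start).total_seconds() / 3600
        return (f"{self.src_endpoint} -> {self.dst_endpoint}: "
                f"{self.reserved_gbps} Gb/s reserved for {hours:.0f} h")


if __name__ == "__main__":
    now = datetime.now()
    request = CircuitRequest(
        src_endpoint="mghpcc-nese-dcache",   # hypothetical identifier
        dst_endpoint="sc25-scinet-nrp",      # hypothetical identifier
        reserved_gbps=400,
        start=now,
        end=now + timedelta(hours=8),
    )
    print(request.describe())
```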

Our NRE demonstration aimed to transfer data at a rate of 800 Gb/s between the NESE disk servers and National Research Platform (NRP) servers in the SCinet cluster on the SC25 show floor. Achieving this with real disk storage, rather than simpler memory‑to‑memory tests, required deep optimization of the full grid data‑transfer stack on 38 commodity storage servers. Bach and Kupcevic led kernel‑level optimization on each node and cluster‑level optimization of the dCache service to match the SENSE‑provisioned network capacity.
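Kernel-level tuning for transfers like these typically revolves around TCP buffer sizes and congestion control on each node. The read-only Python sketch below checks a few standard Linux settings against example targets; the thresholds are illustrative assumptions, not the actual values used on the NESE servers.

```python
# Illustrative only: a read-only check of Linux TCP settings that commonly
# matter for high-bandwidth, high-latency transfers. The target values are
# examples, not the tuning actually applied to the NESE dCache nodes.
from pathlib import Path

EXAMPLE_TARGETS = {
    "net/core/rmem_max": 256 * 1024 * 1024,  # example: 256 MiB receive buffer cap
    "net/core/wmem_max": 256 * 1024 * 1024,  # example: 256 MiB send buffer cap
}


def read_sysctl(name: str) -> str | None:
    """Return the current value of a sysctl from /proc/sys, or None if unavailable."""
    try:
        return (Path("/proc/sys") / name).read_text().strip()
    except OSError:
        return None  # not on Linux, or insufficient permission


if __name__ == "__main__":
    for name, minimum in EXAMPLE_TARGETS.items():
        value = read_sysctl(name)
        if value is None:
            print(f"{name}: unavailable on this system")
        elif int(value) < minimum:
            print(f"{name}: {value} (below example target {minimum})")
        else:
            print(f"{name}: {value} (ok)")
    print(f"congestion control: {read_sysctl('net/ipv4/tcp_congestion_control')}")
```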

With the optimized NESE dCache cluster and high‑priority SENSE circuits, the demonstration sustained 700 Gb/s from MGHPCC to the SC25 show floor and almost 400 Gb/s in the reverse direction. All transfers used the same grid tools and workflows that move LHC data from CERN to MGHPCC today, directly exercising the production path needed for the High-Luminosity LHC era and for other next-generation projects that rely on large amounts of data.

Transfer rate during NRE demonstration. We achieved 700 Gb/s transmissions from NESE disk servers to the SCinet cluster on the show floor, and almost 400 Gb/s in the reverse direction.

The NRP system was used by several NRE demonstrations during SC25, with the highest throughput coming, by a large margin, from NESE. These results open the door to even higher transfer rates with modern NVMe storage and multi-Tb/s network connections.

Rate of data transferred to the NRP servers during the SC25 Network Research Exhibition. The NESE system at the MGHPCC was responsible for more than half of the rate during this year's demonstrations.

Over the three days of the exhibition, more than 200 PB of data was transferred in and out of the SCinet cluster. In addition to the volume of data, the demonstration showcased advanced capabilities of the NESE dCache system that will be critical for next-generation experiments: high-priority network connections with reserved bandwidth (in collaboration with ESnet, as described above) and FPGA-based tools for packet marking, monitoring, and routing (in collaboration with the OCT project at the MGHPCC). Together with SENSE and NRP, these capabilities make the MGHPCC, NET2, and NESE a practical platform where large collaborations can prototype and operate end-to-end workflows.
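As a rough, decimal-unit back-of-envelope check, the short Python snippet below converts sustained transfer rates into daily data volumes: 700 Gb/s corresponds to roughly 7.6 PB per day, which puts the three-day, 200 PB aggregate across all demonstrations into perspective.

```python
# Back-of-envelope arithmetic (decimal units): data volume moved at a sustained rate.
def petabytes_per_day(gbps: float) -> float:
    bytes_per_second = gbps * 1e9 / 8          # Gb/s -> bytes per second
    return bytes_per_second * 86_400 / 1e15    # bytes per day -> PB per day


if __name__ == "__main__":
    for rate in (700, 400):
        print(f"{rate} Gb/s sustained ~= {petabytes_per_day(rate):.1f} PB/day")
    # 700 Gb/s is about 7.6 PB/day; over three days, all demonstrations combined
    # moved more than 200 PB through the SCinet cluster.
```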

The authors of NRE124 acknowledge the contributions of UMass Information Technology to the SCinet cluster (at the silver level) and the WAN connections from New York City and Boston to St. Louis provided by Zayo and ESnet. They also thank NESE/MGHPCC, NET2/MGHPCC, OCT/MGHPCC, UMass Research Computing, Harvard Research Computing, NEREN, NSF FABRIC, the National Research Platform, and DOE ESnet for their collaboration.

Related

MGHPCC Goes to SC25 (MGHPCC News)
