C3DDB Compute, Storage, and Networking Resources

The C3DDB compute cluster currently consists of 133 compute nodes with a total of 7,200 cores, 61 Terabytes of main memory, an FDR InfiniBand interconnect, a 2.24 Petabyte (2 Pebibyte) Lustre file system, and a 54 Terabyte NAS.

Compute
  • 100 compute nodes, each consisting of a Dell PowerEdge R715 chassis containing:
    Four 16-core AMD Abu Dhabi Opteron processors (AMD 6376) running at 2.3 GHz
    256 GB main memory
    500 GB scratch storage
  • 32 compute nodes, each consisting of an HP DL580 chassis containing:
    Two 10-core / 20-thread Intel Ivy Bridge processors (Intel E7-4830 v2) running at 2.2 GHz
    1 Terabyte main memory
    3 x NVIDIA Tesla K40c GPU accelerators
  • 12 compute nodes, each consisting of a Dell PowerEdge C4130 chassis containing:
    Two 14-core / 28-thread Intel Broadwell-EP processors (Intel E5-2680 v4) running at 2.4 GHz
    128 GB main memory
    1,000 GB scratch storage
  • 42 Intel 7000 chassis, each containing four Intel Xeon Phi 7210 (Knights Landing, "KNL") compute nodes
    168 Xeon Phi 7210 nodes total
  • 1 large-memory compute node consisting of an SGI UV2000 containing:
    Twenty 8-core Intel Ivy Bridge processors (Intel E5-4650 v2) running at 2.4 GHz
    Four Terabytes of main memory
  • Master and gateway nodes to support login, management, and network access
  • License server
  • All of the nodes in the system run RHEL 6.5 and are managed by the SLURM resource manager.
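Since all nodes are scheduled through SLURM, work is typically submitted as a batch script. The following is a minimal sketch only: the job name, resource values, and run command are illustrative, and any partition or account options must match what `sinfo` and your account settings report on the actual system.

```shell
#!/bin/bash
# Minimal SLURM batch script (sketch; resource values are illustrative).
#SBATCH --job-name=example        # hypothetical job name
#SBATCH --nodes=1                 # request one compute node
#SBATCH --ntasks-per-node=16      # 16 tasks on that node
#SBATCH --mem=32G                 # 32 GB of the node's memory
#SBATCH --time=01:00:00           # one-hour wall-clock limit

# Commands below execute on the allocated compute node.
echo "Running on $(hostname)"
```

A script like this would be submitted with `sbatch script.sh` and monitored with `squeue -u $USER`.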

Storage
  • 2.24 Petabytes (2 Pebibytes) of storage for scratch and project directories, managed by a high-performance Lustre file system.
  • 54 Terabytes of NAS storage for home directories, supported by snapshot and backup services.
  • dbGaP-compliant directories for sensitive data.

Networking
  • For ordinary use:
    • The C3DDB is accessible via SSH, SFTP, and SCP from anywhere on the public internet.
  • For high-speed data transfers:
    • All computing systems at the MGHPCC, including the C3DDB, are accessible via 10 Gbps links to the five MGHPCC member universities.
    • The C3DDB is also accessible via the Northern Crossroads node at the MGHPCC.
    • A Globus Connect Data Transfer Node is also available for those who need a highly optimized data transfer path.
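For ordinary SSH/SFTP/SCP access, sessions look like the following. This is a sketch only: `c3ddb.example.org` is a placeholder, not the real hostname, and `username` stands for your assigned account name; substitute the address provided with your C3DDB account.

```shell
# Interactive login over SSH (placeholder hostname):
ssh username@c3ddb.example.org

# Copy a local file to your home directory with SCP:
scp results.tar.gz username@c3ddb.example.org:~/

# Interactive file transfer session with SFTP:
sftp username@c3ddb.example.org
```

For bulk transfers, the Globus Connect Data Transfer Node is generally preferable to SCP/SFTP, since Globus manages parallel streams and restarts interrupted transfers automatically.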