Legacy documentation

This page describes a service provided by a retired ACENET system. Most ACENET services are now provided by national systems; please refer to their documentation for current information.

Overview | Mahone | Placentia | Fundy | Glooscap

General Information

Description: A parallel cluster at Saint Mary's University well suited to MPI work
Total nodes: 134
Total cores: 536
Node model: SunFire X4100, SunFire X2200
Architecture: Three 8.0 GB/s HyperTransport links, with 6.0 GB/s bandwidth between processor and memory
RAM model: DDR1/400 ECC registered DIMMs (128-bit plus ECC data bus)
RAM/node: 16 GB; 64 GB on cl064-cl079
Sockets/node: 2
CPU: AMD Opteron 285 SE, 2593 MHz (cl001-cl042); AMD Opteron 290, 2792 MHz (cl043-cl063); AMD Opteron 2222, 3015 MHz (cl064-cl141)
Cores/CPU: 2
Interconnect: Myrinet-2000 (MX-2G) (cl002-cl063), 224 cores; Myrinet-10G (MX-10G) (cl064-cl141), 312 cores
Operating system: EL6

Nvidia GPU nodes

Mahone has four 4-core GPU-equipped nodes. Please see the CUDA page for details on how to use them.

Myrinet switches and Open MPI

Nodes cl002 through cl063 are connected to a Myrinet-2000 switch; the newer nodes cl064 through cl141 are connected to a Myrinet-10G switch. Myrinet MPI jobs cannot span the two switch groups. To ensure that Grid Engine does not assign parallel jobs to hosts spanning the two switches, there are two parallel environments: ompi_2000, serving hosts connected to the Myrinet-2000 switch, and ompi for the Myrinet-10G hosts.

MPI users should use 'wild card' syntax in the parallel environment specification in order to qualify a job to run in whichever switch group has sufficient resources available. For example:

#$ -pe ompi* 16

This specifies a 16-slot Open MPI job that will run on either Myrinet-2000 or Myrinet-10G, depending on which resource is available. The command-line analogue must quote or escape the '*' character to avoid shell globbing:

$ qsub -pe "ompi*" 16 scriptname
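Putting the pieces above together, a complete submission script might look like the following. This is a minimal sketch: the runtime request, executable name (mycode), and other resource values are illustrative examples, not site defaults.

```shell
#!/bin/bash
# Hypothetical Grid Engine job script for Mahone.
#$ -cwd                  # run the job from the submission directory
#$ -l h_rt=12:00:00      # example wall-clock time request
#$ -pe ompi* 16          # 16 slots in whichever switch group has room

# Grid Engine sets $NSLOTS to the number of slots actually granted,
# so the mpirun launch stays consistent with the -pe request.
mpirun -np $NSLOTS ./mycode
```

Note that inside a script the '*' in the #$ directive needs no quoting, since Grid Engine reads directive lines directly and no shell globbing occurs; quoting is only required when the -pe option is given on the qsub command line.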
