There are three kinds of compute nodes in the cluster: General Computing Nodes, GPU Nodes, and Petascale Data Analysis Facility (PDAF) Nodes. The current specifications for each type of node are as follows:
General Computing Nodes | |
---|---|
Processors | Dual-socket, 8-core, 2.6 GHz Intel Xeon E5-2670 (Sandy Bridge) |
Memory | 64 GB (4 GB/core); 128 GB optional |
Network | 10 GbE (QDR InfiniBand optional) |
Hard Drive | 500 GB onboard (second hard drive or SSD optional) |
Warranty | 3 years |
GPU Nodes | |
---|---|
Host Processors | Dual-socket, 6-core, 2.3 GHz Intel Xeon E5-2630 (Sandy Bridge) |
GPUs | 4 NVIDIA GeForce GTX 680 (GTX Titan upgrade available) |
Memory | 32 GB (64 GB or 128 GB optional) |
Network | 10 GbE (QDR InfiniBand optional) |
Hard Drive | 500 GB + 240 GB SSD |
Warranty | 3 years |
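As a quick sanity check on a GPU node, the four cards can be enumerated through the CUDA runtime API. The following is a minimal sketch, assuming the CUDA toolkit is available on the node (compile with `nvcc`); it is illustrative, not an officially supported tool.

```c
/* Minimal sketch: enumerate the GPUs visible on a GPU node.
   Assumes the CUDA toolkit is installed; compile with `nvcc query.c`. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess) {
        fprintf(stderr, "No CUDA-capable devices visible\n");
        return 1;
    }
    printf("%d GPU(s) found (4 expected on a standard GPU node)\n", count);
    for (int i = 0; i < count; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  GPU %d: %s, %.1f GB device memory\n", i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```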
PDAF (shared resource; pay-as-you-go only) | |
---|---|
Processors | 8-socket, 4-core AMD Opteron (Shanghai) |
Memory | 512 GB |
Network | 10 GbE |
(RCI will update the hardware choices for general computing and GPU condo purchasers annually, to keep pace with advances in technology and cost.)
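The PDAF configuration above, with 32 cores and 512 GB of memory in a single system image, is suited to threaded, memory-hungry analyses. Below is a minimal OpenMP sketch of that usage pattern; the socket and core counts come from the table, while the workload itself is purely illustrative. Compile with `gcc -fopenmp`.

```c
/* Minimal sketch: a threaded reduction of the kind that benefits from
   a single large shared-memory node such as PDAF. */
#include <omp.h>
#include <stdio.h>

int main(void) {
    const long n = 100000000L;  /* problem size is illustrative only */
    double sum = 0.0;

    /* Threads default to the cores available (up to 32 on the PDAF node). */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; i++)
        sum += 1.0 / (double)i;

    printf("partial harmonic sum = %.6f (%d threads available)\n",
           sum, omp_get_max_threads());
    return 0;
}
```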
TSCC nodes with the QDR InfiniBand (IB) option connect to 32-port IB switches, allowing up to 512 cores to communicate at full bisection bandwidth for low-latency parallel computing.
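To see what that low latency means in practice, a standard way to measure point-to-point performance is an MPI ping-pong between two ranks placed on different nodes. The sketch below assumes an MPI installation (build with `mpicc`, launch with `mpirun -np 2`); it is a rough illustration, not a tuned benchmark.

```c
/* Minimal sketch: MPI ping-pong between two ranks to gauge point-to-point
   latency (e.g., across the InfiniBand fabric). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    const int iters = 1000;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* Rank 0 sends a byte and waits for the echo. */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Rank 1 echoes each byte back. */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("average one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);
    MPI_Finalize();
    return 0;
}
```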
TSCC users will receive 100 GB of backed-up home file storage and shared access to the 200+ TB Data Oasis Lustre-based high-performance parallel file system. (Data Oasis has a 90-day purge policy, and this storage is not backed up.)
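Because Data Oasis is subject to the 90-day purge and is not backed up, it is worth periodically checking which files are approaching the cutoff. The following is a minimal sketch under stated assumptions: the scratch path is hypothetical, and whether the purge keys on access time or modification time should be confirmed with SDSC documentation.

```c
/* Minimal sketch: walk a directory tree and flag files not accessed within
   the 90-day purge window. The default path below is hypothetical; confirm
   the actual scratch location and purge criterion (atime vs. mtime). */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

#define PURGE_DAYS 90
static time_t cutoff;

static int check(const char *path, const struct stat *sb,
                 int typeflag, struct FTW *ftwbuf) {
    (void)ftwbuf;
    if (typeflag == FTW_F && sb->st_atime < cutoff)
        printf("%s  (last access %ld days ago)\n", path,
               (long)((time(NULL) - sb->st_atime) / 86400));
    return 0;  /* keep walking */
}

int main(int argc, char **argv) {
    const char *root = argc > 1 ? argv[1] : "/oasis/scratch";  /* hypothetical */
    cutoff = time(NULL) - (time_t)PURGE_DAYS * 86400;
    if (nftw(root, check, 16, FTW_PHYS) == -1) {
        perror("nftw");
        return 1;
    }
    return 0;
}
```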
Additional persistent storage can be mounted from lab file servers over the campus network or purchased from SDSC.