The following boilerplate may be used in the Facilities section of NSF/NIH/etc. funding proposals as needed. Be sure to include only the sections that describe services you intend to use.
The UC San Diego data communications network is a high-speed network with a redundant 10GE backbone serving building switches at 10Gbps and 1Gbps. The UCSD network connects more than 200 buildings and includes more than 900 edge switches and 65,000 ports. All desktop ports are 1Gbps capable, with 10Gbps connections (dedicated if necessary) available to research applications on request. The UCSD network is IPv6-capable, with IPv6 available on request for individual VLANs and routed to the Internet.
The campus also provides ubiquitous 802.11n wireless service with over 3500 wireless access points, secured with WPA2-Enterprise using 802.1X for access control.
UCSD is redundantly connected to the California Research and Education Network (CalREN) via multiple 10Gbps links to the Corporation for Education Network Initiatives in California (CENIC), providing the campus with 50Gbps of connectivity to Internet2 and other research networks as well as 20Gbps of connectivity to the commodity Internet. In addition, under NSF grant # ACI-1340964, UCSD researchers can make use of a layer-2, 100Gbps connection through the CENIC network to the Pacific Wave regional network, the Internet2 and ESnet national research networks, and the ANI joint 100G national backbone. An additional 10Gbps of connectivity to ESnet is available at the San Diego Supercomputer Center (SDSC).
Colocation services provided by the San Diego Supercomputer Center are designed to be a cost-effective hosting solution for researcher-purchased computing and storage equipment, leveraging historical UC investments and economies of scale. The environmentally controlled data centers span 19,000 square feet and have a total power capacity of 13 megawatts. Interior and exterior security systems include two-factor biometric access to both the host building and the data centers as well as a 120-camera digital security system. Operations staff are available 24/7/365 to provide remote-hands assistance, monitor critical and customer systems, oversee the facility, and respond quickly to any data center event. Emergency power systems, including uninterruptible power supplies (UPS) and diesel generators, are available on site for research with uptime requirements or to maintain production environments. Seismic isolation systems rated for earthquakes of magnitude 7.0 or higher are installed on every rack to further protect equipment and data. The colocation facility is strategically positioned on the UCSD network to maximize diverse and robust connections to many research networks in addition to the commodity Internet.
The Triton Shared Computing Cluster (TSCC) is a medium-scale, high-performance parallel cluster using the latest processor and interconnect (networking) technologies. Currently offered configurations comprise nodes with the latest-generation Intel server-class (Xeon) processors with 16 computing cores and either 64 gigabytes or, optionally, 128 gigabytes of main memory. Two interconnect technologies are available: InfiniBand for low-latency parallel computing, and 10 Gigabit Ethernet (10GbE) as a more cost-effective alternative for computing workloads less sensitive to latency. Computing nodes with Graphics Processing Unit (GPU) accelerators are also available. A wide suite of research software is offered; researchers may also install and run software tools of their choosing. All researchers using TSCC have access to a high-capacity parallel file system and external, high-bandwidth research networks such as XSEDE and Internet2. Vendor contracts are negotiated to provide for annual technology insertion as newer processors and other components become available at competitive prices.
TSCC is operated under a hybrid business model, which includes researcher-contributed (condo) computing nodes and pre-purchased computing time on a shared (hotel) portion of the system.
In the condo portion of the system, researchers purchase computing nodes using funds from grants or other sources and contribute the nodes to the cluster. In exchange for an annual operating fee, the researcher-owned nodes are housed in an energy-efficient data center at the San Diego Supercomputer Center (SDSC) and maintained by professional system administrators. Researchers may compute exclusively on a number of nodes equal to the number they purchased, or may use the entire cluster as a shared resource.
On the hotel portion of the cluster, researchers who do not wish, or are not able, to participate in the condo program, or who require only a small amount or short duration of computing, may purchase time on a shared partition of TSCC at a metered rate (per processor core per hour).
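As an illustration of how metered hotel time is typically consumed, the sketch below shows a minimal batch-job script in the PBS/TORQUE style commonly used on clusters of this kind. The queue name (`hotel`), the allocation account, and the application binary are hypothetical placeholders for illustration, not confirmed TSCC settings.

```shell
#!/bin/bash
#PBS -q hotel                 # hypothetical shared (hotel) queue name
#PBS -l nodes=1:ppn=16        # one 16-core node, matching the node configuration above
#PBS -l walltime=02:00:00     # 2 hours; 1 node x 16 cores x 2 h = 32 core-hours billed
#PBS -N example_job           # job name shown in the queue listing
#PBS -A abc123                # hypothetical project/allocation account to charge

cd "$PBS_O_WORKDIR"           # start in the directory the job was submitted from
mpirun -np 16 ./my_mpi_app    # run a 16-way MPI job on the allocated cores
```

A script like this would be submitted with `qsub script.pbs`; the core-hour charge scales with the number of cores requested and the actual run time.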