The following boilerplate may be used in the Facilities section of NSF, NIH, and other funding proposals as needed. Include only the sections that describe services you intend to use.
The UC San Diego data communications network is a high-speed network with a redundant 10GE backbone serving building switches at 10Gbps and 1Gbps. The UCSD network connects more than 200 buildings and includes more than 900 edge switches and 65,000 ports. All desktop ports are 1Gbps capable, with 10Gbps connections (dedicated if necessary) available to research applications on request. The UC San Diego network is IPv6-capable, with IPv6 available on request for individual VLANs and routed to the Internet.
The campus also provides ubiquitous 802.11n wireless service with over 3500 wireless access points, secured with WPA2-Enterprise using IEEE 802.1X for access control.
UC San Diego is redundantly connected to the California Research and Education Network (CalREN), operated by CENIC.
Colocation services provided by the San Diego Supercomputer Center are designed to be a cost-effective hosting solution for researcher-purchased computer and storage equipment, leveraging historical UC investments and economies of scale. The environmentally controlled data centers span 19,000 square feet and have a total power capacity of 13 megawatts. Interior and exterior security systems include two-factor authentication to both the host building and the data centers as well as a 120-camera digital security system. Operations staff are available 24/7/365 to provide remote-hands assistance, monitor critical and customer systems, oversee the facility, and respond quickly to any data center event. Emergency power systems, including Uninterruptible Power Supply (UPS) units and diesel generators, provide backup power in the event of a utility outage.
The TSCC is a medium-scale, high-performance, parallel computing cluster using the latest processor and interconnect (networking) technologies. Currently offered configurations comprise nodes with the latest generation of Intel server-class processors (Xeon) with 28 computing cores and 128 gigabytes of main memory. Two interconnect technologies are available: InfiniBand for low-latency parallel computing and 10 Gigabit Ethernet (10GbE) as a more cost-effective alternative for computing workloads less sensitive to latency. Computing nodes with Graphics Processing Unit (GPU) accelerators are also available. A wide suite of research software is offered; researchers may also install and run software tools of their choosing. All researchers using TSCC have access to a high-capacity parallel file system and external, high-bandwidth research networks such as CENIC and Internet2. Vendor contracts are negotiated to provide for annual technology insertion as newer processors and other components become available at competitive prices.
TSCC is operated under a hybrid business model, which includes researcher-contributed (condo) computing nodes and pre-purchased computing time on a shared (hotel) portion of the system.
In the condo portion of the system, researchers purchase computing nodes using funds from grants or other sources and contribute the nodes to the cluster. In exchange for an annual operating fee, the researcher-owned nodes are located in an energy-efficient data center at the San Diego Supercomputer Center (SDSC) and maintained by professional system administrators. Researchers may compute exclusively on a number of nodes equal to the number they have contributed.