TSCC Storage Backup Policy
Each of the three TSCC data storage systems has its own backup policy and performance characteristics. This page describes the policies currently in effect. If your project requires greater capacity or reliability than these policies provide, please contact TSCC User Support.
- Home Storage Area (Dual Copy Storage)
Home areas on TSCC are located on NFS servers using ZFS as the underlying file system. The home file system provides 80TB of space shared among all TSCC users.
- Local HD Redundancy: TSCC uses RAID-Z2 (the ZFS variant of RAID 6), so a double-drive failure can occur without data loss
- Connectivity: 10GbE; delivers > 300MB/sec to a single node; > 500MB/sec aggregate
- Space: Each user is guaranteed at least 100GB of space
- Backup/Replication: A snapshot of each user's home area is taken every 24 hours and replicated to an identical server. Snapshots are retained for at least 7 days (longer if space allows); no other backup is performed. See the snapshot recovery page for the restore procedure.
- Total Space for files, snapshots: 80TB
- Location: Snapshots are accessible at $HOME/.zfs/snapshot (see the recovery sketch after this list)
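For quick reference, here is a minimal Python sketch of recovering a file from one of these snapshots. It assumes only the standard ZFS snapshot layout noted above (one directory per retained snapshot under $HOME/.zfs/snapshot); the example file path is hypothetical, and the snapshot recovery page remains the authoritative procedure.

```python
# Minimal sketch: restore a file from a ZFS home-area snapshot.
# Only the $HOME/.zfs/snapshot location comes from the policy above;
# the snapshot naming and the example file path are assumptions.
import os
import shutil

snapshot_root = os.path.expanduser("~/.zfs/snapshot")

# List available snapshots (one subdirectory per retained snapshot).
# Assumes snapshot names sort chronologically, e.g. date-stamped names.
snapshots = sorted(os.listdir(snapshot_root))
print("Available snapshots:", snapshots)

# Copy a lost file out of the most recent snapshot back into $HOME.
# "projects/results.csv" is a hypothetical path for illustration.
lost_file = "projects/results.csv"
src = os.path.join(snapshot_root, snapshots[-1], lost_file)
dst = os.path.join(os.path.expanduser("~"), lost_file)
shutil.copy2(src, dst)  # preserves timestamps and permissions
print(f"Restored {lost_file} from snapshot {snapshots[-1]}")
```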
- Lustre Storage Area (Single Copy Storage)
TSCC has a parallel file system (PFS) known as Data Oasis. It provides at least 200TB of scratch space shared among all TSCC users.
- Local HD Redundancy: Single-drive hardware failure is tolerated through RAID 5 on the Lustre Object Storage Targets (OSTs)
- Connectivity: 4 x 10GbE; delivers > 500MB/sec to a single node; > 2.5GB/sec aggregate
- Space: Each user is given access to this storage as part of their allocation. At this time, no minimum guarantee or maximum capacity is defined. If a project needs medium-term (1-6 mo.) data availability, TSCC supports project-based special requests; please email the TSCC Technical Contact for details.
- There is a default 90-day purge policy on files stored in Data Oasis, based on the file creation date (see the purge-check sketch after this list)
- Backup/Replication: No backup of this storage is performed
- Total Space for files: 200TB minimum
- Location: Job-specific storage accessible at /oasis/tscc/scratch/<username>
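Because purged files are not backed up anywhere, it can help to periodically scan your scratch area for files nearing the 90-day window. The Python sketch below walks /oasis/tscc/scratch/&lt;username&gt; and flags old files. Note one assumption: the purge is based on creation date, but standard Linux stat calls do not portably expose creation time, so this sketch uses st_ctime as an approximation, which may differ from the timestamp the purge actually uses.

```python
# Minimal sketch: flag files in your Data Oasis scratch area that are
# approaching the 90-day purge window. Uses st_ctime as a stand-in for
# creation time (an approximation; see the note above).
import os
import time

scratch = f"/oasis/tscc/scratch/{os.environ['USER']}"
cutoff = time.time() - 80 * 24 * 3600  # flag files older than ~80 days

for dirpath, _dirnames, filenames in os.walk(scratch):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue  # file vanished or is unreadable; skip it
        if st.st_ctime < cutoff:
            age_days = (time.time() - st.st_ctime) / 86400
            print(f"{path}: ~{age_days:.0f} days old; at risk of purge")
```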
- Local Node Temporary Storage
A small scratch space is available to all users: approximately 6GB per node, available only during job runs and purged between jobs.
- Local HD Redundancy: Single-drive failure is tolerated via Linux software RAID 1 (mirroring)
- Connectivity: Local HD; about 50MB/sec/node; about 14GB/sec aggregate
- Space: Generally about 6GB/node; purged between jobs
- Backup/Replication: No backup of this storage is performed
- Total Space for files: Dependent on the number of nodes requested for the job
- Location: Storage accessible at /tmp from the local node only (see the staging sketch after this list)
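A common pattern is to stage I/O-intensive work through this local scratch and copy results to persistent storage before the job ends, since everything under /tmp is purged between jobs. The Python sketch below illustrates one way to do this; the PBS_JOBID environment variable and the file names are assumptions for illustration, so adjust them to your scheduler and workflow.

```python
# Minimal sketch: use node-local /tmp as fast per-job scratch, then
# copy results back before the job ends (/tmp is purged between jobs).
# PBS_JOBID and the file names below are assumptions for illustration.
import os
import shutil
import tempfile

# Create a job-private working directory under /tmp.
job_id = os.environ.get("PBS_JOBID", "interactive")
workdir = tempfile.mkdtemp(prefix=f"job-{job_id}-", dir="/tmp")

try:
    # ... run I/O-intensive steps against files in workdir ...
    result = os.path.join(workdir, "output.dat")
    with open(result, "w") as f:
        f.write("results\n")

    # Copy results to persistent storage before the job exits.
    dest = os.path.expanduser("~/job-results/")
    os.makedirs(dest, exist_ok=True)
    shutil.copy2(result, dest)
finally:
    shutil.rmtree(workdir, ignore_errors=True)  # tidy the local scratch
```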