The rfs.cfg file looks like this, with additional collectors defined after the sections shown here.
Name | Explanation |
[RFS] | Start of generic RFS parameters |
Version=4.0 | Version # |
LocalPath=/rfs/files/localpath | Where statistics and control files seem to sit |
CacheDir=/rfs/cache | Where to put parts of files temporarily |
SafetyPath=/rfs/safety | Secondary copy of metadata for user file changes not associated with current local collection. Failed copies go here too. |
LogFile=/rfs/logs/rfs.log | Log file name/location |
Debug=no | Always set this to no |
MaxCacheSpace=2000 | Max MB of storage to use in cache. New minimum=1000 (our RAMdisk=1000); new maximum=1/2 of system memory |
MaxSMFiles=64 | Max # of SM connections at once for reading files. Per old notes, not supposed to be >32 on a Unix system |
MaxSMWriters=4 | Max # of SM connections at once for writing files. If this is a write-only system we can crank this up |
FileCleanupTimeout=1440 | New default |
Flush=no | Flush buffers when the OS is ready; always < 2 min |
.... | |
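Stripped of the explanatory column, this section of the actual file is just a bracketed header over key=value lines; a sketch reassembled from the values above (assuming the standard INI-style layout):

```ini
[RFS]
Version=4.0
LocalPath=/rfs/files/localpath
CacheDir=/rfs/cache
SafetyPath=/rfs/safety
LogFile=/rfs/logs/rfs.log
Debug=no
MaxCacheSpace=2000
MaxSMFiles=64
MaxSMWriters=4
FileCleanupTimeout=1440
Flush=no
```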
[EXCLUSIONS] | Don't copy these files |
.nfs* | |
[ALIASES] | SM system info |
SystemName=sth1 | Primary StorHouse system |
MirrorName= | None |
Database=RFS | Database name on StorHouse |
TableName=ARCHIVE | Base name, suffixes added for various tables |
UserId=SYSADM | username |
Password=_)=>Ol | password (encrypted) |
.... | |
[STATS] | Start of statistics definitions |
SystemName=sth1 | Primary StorHouse system to store statistics in |
MirrorName= | None |
StatsInterval=60 | Take stats every 60 seconds |
Database=RFS | Database name to use |
TableName=ARCHIVE | Base name, suffixes added (like .STATS_RFS) |
UserId=SYSADM | username |
Password=_)=>Ol | password (encrypted) |
FileType=TXT | Store stats as a text file (can be xml or html too) |
.... | |
[COLLECTORS] | List of collectors |
E1998_COLR=E1998_COLN | 1998 data |
E1999_COLR=E1999_COLN | 1999 data |
E2000_COLR=E2000_COLN | 2000 data |
E1997_COLR=E1997_COLN | 1997 data |
ROOT=COLL1 | old all-data |
.......... | |
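The names here chain the later sections together: each [COLLECTORS] entry points a collector section at its collection section, the collection's Storage= key points at a storage section, and the storage's SystemName= key points at a "system" section. For the 1997 data the chain is (a sketch showing only the linking keys):

```ini
[COLLECTORS]
E1997_COLR=E1997_COLN

[E1997_COLN]
Storage=E1997_STORAGE

[E1997_STORAGE]
SystemName=E1997_SYS
```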
[sth1] | "System" for COLL1 |
DNSName=chuma.icecube.wisc.edu | Real StorHouse System to use |
STHName=sth1 | Name of StorHouse system |
MailRecipient=222_hardware@icecube | who to warn |
RetryInterval=3 | minutes to wait before retry |
UserId=RFS | StorHouse account ID for logging |
Password=^u= | password (encrypted) |
Group=RFS | Group for userID |
VSET=RFS | VolumeSet to write to. RFS is the name here |
FSET=RFS | FileSet to write to in the VSET. RFS is the name here |
FSETSegments=1 | Useful when writing multiple collections to the same VSET: write to the first segment, then the second, etc., for better bandwidth |
Checkpoint=1800 | Max MB for writing StorHouse collection |
.... | |
[STOR1] | Define storage for COLL1 |
SystemName=sth1 | Point to "system" for COLL1 |
MirrorName= | None |
Database=RFS | Database name on StorHouse. We use the same one for all data |
TableName=ARCHIVE | Base table name, suffixes added |
UserId=SYSADM | username |
Password=_)=>Ol | password (encrypted) |
MaxSearchConnections=12 | Max # ODBC connections to StorHouse DB tables |
SearchConnectionTimeout=10 | minutes before closing idle DB connection |
.... | |
[COLL1] | Collection COLL1 definition |
Storage=STOR1 | Point to storage for COLL1 |
CollectionDir=/rfs/collections | where collection info/flagging goes |
MaxLoadInterval=480 | longest time to wait (min) between loads |
MaxWriteSize=1800 | largest size a collection can be before a new one starts (MB) |
Compression=no | Don't compress collections (worthless for our data) |
Retention=0 | Don't bother with retention |
.... | |
[ROOT] | Collector for COLL1 |
StagingDir=/rfs/collectors/root | Where the directory tree for storage goes. For some reason the collector directories go in the directory ABOVE it |
UserDir=/ | Subdirectory of StagingDir, also subdirectory of /RFS |
WaitTime=2 | Files 2 minutes idle can be collected |
KeepSubdirectories=1 | # of subdirectories below the collector root that must exist. On UNIX omit or set to 0 |
MaxStagingSpace=150000 | 150GB for staging |
MaxCollectionSpace=150000 | 150GB for collections |
...... | |
[E1997_COLR] | Define Collector for 1997 data |
StagingDir=/rfs/collectors/E1997 | Where the directory tree for storage goes. For some reason the collector directories go in the directory ABOVE it |
UserDir=/EXP/1997 | Subdirectory of StagingDir, also subdirectory of /RFS |
WaitTime=2 | If file idle for 2min, collect it |
KeepSubdirectories=1 | # subdirectories below collector root that must exist. On UNIX omit or set=0 |
MaxStagingSpace=150000 | 150GB staging space |
MaxCollectionSpace=150000 | Allot 150GB collection space |
.... | |
[E1997_COLN] | Collection for 1997 data |
Storage=E1997_STORAGE | Points to storage for 1997 data |
CollectionDir=/rfs/collections/E1997 | Collections for 1997 go here |
MaxLoadInterval=480 | Longest time to wait (min) between loads |
MaxWriteSize=1800 | largest size (MB) a collection can be before a new one starts |
Compression=no | Don't compress collections |
Retention=0 | Don't worry about retention on RFS server |
.... | |
[E1997_STORAGE] | Define Storage for 1997 data |
SystemName=E1997_SYS | Point to "System" for 1997 data |
MirrorName= | None |
Database=RFS | Use same DB for all data: RFS |
TableName=ARCHIVE | Use same Table name for all data |
UserId=SYSADM | username |
Password=_)=>Ol | password (encrypted) |
MaxSearchConnections=12 | Max # of ODBC connections to the StorHouse DB |
SearchConnectionTimeout=10 | minutes before closing idle DB connection |
.... | |
[E1997_SYS] | Definition of "System" for 1997 data |
DNSName=chuma.icecube.wisc.edu | Real StorHouse System |
STHName=sth1 | name of StorHouse system |
MailRecipient=james.bellinger@icecube | warn who |
RetryInterval=3 | Minutes to wait before retry |
UserId=RFS | User ID for logging |
Password=^u= | password (encrypted) |
Group=RFS | group for user |
VSET=E1997 | VolumeSet for 1997 data |
FSET=E1997 | FileSet within VSET for 1997 data |
FSETSegments=1 | Useful when writing multiple collections to the same VSET: write to the first segment, then the second, etc., for better bandwidth |
Checkpoint=1800 | max MB for writing StorHouse collection |
...... |
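Adding data for another year means one new line in [COLLECTORS] plus the same four-section chain. A hypothetical E2001 set, copying the 1997 pattern (E2001 and its paths are illustrative, not from the real file), would look like:

```ini
[COLLECTORS]
E2001_COLR=E2001_COLN

[E2001_COLR]
StagingDir=/rfs/collectors/E2001
UserDir=/EXP/2001
WaitTime=2
KeepSubdirectories=1
MaxStagingSpace=150000
MaxCollectionSpace=150000

[E2001_COLN]
Storage=E2001_STORAGE
CollectionDir=/rfs/collections/E2001
MaxLoadInterval=480
MaxWriteSize=1800
Compression=no
Retention=0

[E2001_STORAGE]
SystemName=E2001_SYS
```

with [E2001_STORAGE] taking the same Database/TableName/credential keys as [E1997_STORAGE], and an [E2001_SYS] section matching [E1997_SYS] except for VSET=E2001 and FSET=E2001.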
Modified 26-May-2011 at 15:05
http://icecube.wisc.edu/~jbellinger/StorHouse/26May2011