CHAPTER 6

Configuring Sun StorageTek QFS in a Sun Cluster Environment

This chapter describes how the Sun StorageTek QFS software works in a Sun Cluster environment. It also provides configuration examples for a Sun StorageTek QFS shared file system in a Sun Cluster environment and for an unshared Sun StorageTek QFS file system in a Sun Cluster environment.

This chapter contains the following sections:

Before You Begin

How the Sun Cluster System and the Sun StorageTek QFS Software Interact

Sun StorageTek QFS Support for Solaris Volume Manager for Sun Cluster

About Configuration Examples

Configuring a Sun StorageTek QFS Shared File System in a Sun Cluster Environment

Configuring an Unshared File System in a Sun Cluster Environment

Before You Begin

With versions 4U2 and later of the Sun StorageTek QFS software, you can install a Sun StorageTek QFS file system in a Sun Cluster environment and configure the file system for high availability. The configuration method you use varies, depending on whether your file system is shared or unshared.

This chapter assumes that you are an experienced user of both the Sun StorageTek QFS software and the Sun Cluster environment. It also assumes you have performed either or both of the following:

It is recommended that you read the following documentation before continuing with this chapter:

Local Disks

Global Devices

Device ID (DID)

Disk Device Groups

Disk Device Group Failover

Local and Global Namespaces

Cluster File Systems

HAStoragePlus Resource Type

Volume Managers



Note - The File System Manager software can also be used to control file systems in Sun Cluster environments. It recognizes and identifies cluster nodes and automatically prompts you to add other cluster nodes when adding a server. You have the option of creating non-archiving highly available (HA) shared or stand-alone Sun StorageTek QFS file systems on nodes within a Sun Cluster configuration. See the File System Manager online Help for more information.




Restrictions

The following restrictions apply to the Sun StorageTek QFS software in a Sun Cluster environment:



Note - Failback is not supported as a feature of the SUNW.qfs agent.





Note - Although installing a Sun StorageTek QFS file system in a Sun Cluster environment improves reliability and decreases or eliminates unplanned downtime, it does not eliminate planned downtime. To maintain the health of the file system, you might occasionally need to bring the Sun StorageTek QFS software down to run the samfsck process. The software must also be shut down to apply software patches or updates.
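
For example, a file system check during a planned maintenance window might look like the following sketch, where qfs1 is a placeholder family set name and the file system has already been unmounted on every node:

# samfsck -F qfs1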




How the Sun Cluster System and the Sun StorageTek QFS Software Interact

The shared file system uses Sun Cluster disk identifier (DID) support to enable data access by the Sun Cluster data service for Oracle Real Application Clusters. The unshared file system uses global device volume support and volume manager-controlled volume support to enable data access by failover applications supported by the Sun Cluster system.

Data Access With a Shared File System

With DID support, each device that is under the control of the Sun Cluster system, whether it is multipathed or not, is assigned a unique DID. For every unique DID device, there is a corresponding global device. The Sun StorageTek QFS shared file system can be configured on redundant storage that consists only of DID devices (/dev/did/*), where DID devices are accessible only on nodes that have a direct connection to the device through a host bus adapter (HBA).

Configuring the Sun StorageTek QFS shared file system on DID devices and configuring the SUNW.qfs resource type for use with the file system makes the file system's shared metadata server highly available. The Sun Cluster data service for Oracle Real Application Clusters can then access data from within the file system. Additionally, the Sun StorageTek QFS Sun Cluster agent can then automatically relocate the metadata server for the file system as necessary.



Note - Beginning with version 4U6 of the Sun StorageTek QFS software you can also have shared clients outside of the cluster in a Sun Cluster environment. For complete configuration instructions, see Configuring Shared Clients Outside the Cluster.



Data Access With an Unshared File System

A global device is the Sun Cluster system's mechanism for accessing an underlying DID device from any node within the Sun Cluster system, assuming that the nodes hosting the DID device are available. Global devices and volume manager-controlled volumes can be made accessible from every node in the Sun Cluster system. The unshared Sun StorageTek QFS file system can be configured on redundant storage that consists of either raw global devices (/dev/global/*) or volume manager-controlled volumes.

Configuring the unshared file system on these global devices or volume manager-controlled devices and configuring the HAStoragePlus resource type for use with the file system makes the file system highly available with the ability to fail over to other nodes.


Sun StorageTek QFS Support for Solaris Volume Manager for Sun Cluster

In the 4U4 release of Sun StorageTek QFS, support was added for Solaris Volume Manager for Sun Cluster, which is an extension to Solaris Volume Manager that is bundled with the Solaris 9 and Solaris 10 OS releases. Sun StorageTek QFS supports Solaris Volume Manager for Sun Cluster only on Solaris 10.

Sun StorageTek QFS support for Solaris Volume Manager for Sun Cluster was introduced to take advantage of shared Sun StorageTek QFS host-based mirroring as well as Oracle's support for application-based recovery (ABR) and direct mirror reads (DMR) for Oracle RAC-based applications.

Use of Solaris Volume Manager for Sun Cluster with Sun StorageTek QFS requires Sun Cluster software and an additional unbundled software package included with the Sun Cluster software.

With this addition of Solaris Volume Manager for Sun Cluster support, four new mount options were introduced. These mount options are only available if Sun StorageTek QFS detects that it is configured on Solaris Volume Manager for Sun Cluster. The mount options are:

The following is a configuration example for using Sun StorageTek QFS with Solaris Volume Manager for Sun Cluster.

The example below assumes that the following configuration has already been completed:

In this example, there are three shared Sun StorageTek QFS file systems: Data, Crs, and Redo.


To Configure a File System With Solaris Volume Manager for Sun Cluster

1. Create the metadb on each node.

For example:


# metadb -a -f -c3 /dev/rdsk/c0t0d0s7


2. Create the disk group on one node.

For example:


# metaset -s datadg -M -a -h scNode-A scNode-B

3. Run scdidadm to obtain devices on one node.

For example:


scNode-A # scdidadm -l
13	scNode-A:/dev/rdsk/c6t600C0FF00000000000332B62CF3A6B00d0 /dev/did/rdsk/d13
14	scNode-A:/dev/rdsk/c6t600C0FF0000000000876E950F1FD9600d0 /dev/did/rdsk/d14
15	scNode-A:/dev/rdsk/c6t600C0FF0000000000876E9124FAF9C00d0 /dev/did/rdsk/d15
16	scNode-A:/dev/rdsk/c6t600C0FF00000000000332B28488B5700d0 /dev/did/rdsk/d16
17	scNode-A:/dev/rdsk/c6t600C0FF000000000086DB474EC5DE900d0 /dev/did/rdsk/d17
18	scNode-A:/dev/rdsk/c6t600C0FF0000000000876E975EDA6A000d0 /dev/did/rdsk/d18
19	scNode-A:/dev/rdsk/c6t600C0FF000000000086DB47E331ACF00d0 /dev/did/rdsk/d19
20	scNode-A:/dev/rdsk/c6t600C0FF0000000000876E9780ECA8100d0 /dev/did/rdsk/d20
21	scNode-A:/dev/rdsk/c6t600C0FF000000000004CAD5B68A7A100d0 /dev/did/rdsk/d21
22	scNode-A:/dev/rdsk/c6t600C0FF000000000086DB43CF85DA800d0 /dev/did/rdsk/d22
23	scNode-A:/dev/rdsk/c6t600C0FF000000000004CAD7CC3CDE500d0 /dev/did/rdsk/d23
24	scNode-A:/dev/rdsk/c6t600C0FF000000000086DB4259B272300d0 /dev/did/rdsk/d24
25	scNode-A:/dev/rdsk/c6t600C0FF00000000000332B21D0B90000d0 /dev/did/rdsk/d25
26	scNode-A:/dev/rdsk/c6t600C0FF000000000004CAD139A855500d0 /dev/did/rdsk/d26
27	scNode-A:/dev/rdsk/c6t600C0FF00000000000332B057D2FF100d0 /dev/did/rdsk/d27
28	scNode-A:/dev/rdsk/c6t600C0FF000000000004CAD4C40941C00d0 /dev/did/rdsk/d28

The mirroring scheme is as follows:

21 <-> 13
14 <-> 17
23 <-> 16
15 <-> 19

4. Add devices to the set on one node.

For example:


# metaset -s datadg -a /dev/did/rdsk/d21 /dev/did/rdsk/d13 /dev/did/rdsk/d14 \
/dev/did/rdsk/d17 /dev/did/rdsk/d23 /dev/did/rdsk/d16 /dev/did/rdsk/d15 \
/dev/did/rdsk/d19

5. Create the mirrors on one node.

For example:


metainit -s datadg d10 1 1 /dev/did/dsk/d21s0
metainit -s datadg d11 1 1 /dev/did/dsk/d13s0
metainit -s datadg d1 -m d10
metattach -s datadg d11 d1
metainit -s datadg d20 1 1 /dev/did/dsk/d14s0
metainit -s datadg d21 1 1 /dev/did/dsk/d17s0
metainit -s datadg d2 -m d20
metattach -s datadg d21 d2
metainit -s datadg d30 1 1 /dev/did/dsk/d23s0
metainit -s datadg d31 1 1 /dev/did/dsk/d16s0
metainit -s datadg d3 -m d30
metattach -s datadg d31 d3
metainit -s datadg d40 1 1 /dev/did/dsk/d15s0
metainit -s datadg d41 1 1 /dev/did/dsk/d19s0
metainit -s datadg d4 -m d40
metattach -s datadg d41 d4
metainit -s datadg d51 -p d1 10m
metainit -s datadg d52 -p d1 200m
metainit -s datadg d53 -p d1 800m
metainit -s datadg d61 -p d2 10m
metainit -s datadg d62 -p d2 200m
metainit -s datadg d63 -p d2 800m
metainit -s datadg d71 -p d1 500m
metainit -s datadg d72 -p d1 65g
 
metainit -s datadg d81 -p d2 500m
metainit -s datadg d82 -p d2 65g

6. Perform the Sun StorageTek QFS installation on each node.

For example:


pkgadd -d . SUNWqfsr SUNWqfsu

7. Create the mcf file on each node.

For example:


/etc/opt/SUNWsamfs/mcf file:

#
# File system Data
#
Data 2 ma Data on shared
/dev/md/datadg/dsk/d53 20 mm Data on
/dev/md/datadg/dsk/d63 21 mm Data on
/dev/md/datadg/dsk/d3 22 mr Data on
/dev/md/datadg/dsk/d4 23 mr Data on
#
# File system Crs
#
Crs 4 ma Crs on shared
/dev/md/datadg/dsk/d51 40 mm Crs on
/dev/md/datadg/dsk/d61 41 mm Crs on
/dev/md/datadg/dsk/d52 42 mr Crs on
/dev/md/datadg/dsk/d62 43 mr Crs on
#
# File system Redo
#
Redo 6 ma Redo on shared
/dev/md/datadg/dsk/d71 60 mm Redo on
/dev/md/datadg/dsk/d81 61 mm Redo on
/dev/md/datadg/dsk/d72 62 mr Redo on
/dev/md/datadg/dsk/d82 63 mr Redo on


8. Create the file system hosts files.

For example:


/etc/opt/SUNWsamfs/hosts.Data
/etc/opt/SUNWsamfs/hosts.Crs
/etc/opt/SUNWsamfs/hosts.Redo

# scNode-A:root> /usr/cluster/bin/scconf -p | egrep "Cluster node name:|Node private hostname:"
Cluster node name: scNode-A
Node private hostname: clusternode1-priv
Cluster node name: scNode-B
Node private hostname: clusternode2-priv

# Host      Host IP            Server    Not   MDS Server
# Name      Address            Priority  Used  Host
#---------  -----------------  --------  ----  ----------
scNode-A    clusternode1-priv  1         -     server
scNode-B    clusternode2-priv  2         -


9. Create the /etc/opt/SUNWsamfs/samfs.cmd file.

For example:


fs = Data
stripe=1
sync_meta=1
mh_write
qwrite
forcedirectio
notrace
rdlease=300
wrlease=300
aplease=300

fs = Crs
stripe=1
sync_meta=1
mh_write
qwrite
forcedirectio
notrace
rdlease=300
wrlease=300
aplease=300

fs = Redo
stripe=1
sync_meta=1
mh_write
qwrite
forcedirectio
notrace
rdlease=300
wrlease=300
aplease=300


10. Create the Sun StorageTek QFS file systems. See the Sun StorageTek QFS Installation and Upgrade Guide for more information.

For example:


/opt/SUNWsamfs/sbin/sammkfs -S <filesystem>
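
For example, the following sketch creates the three shared file systems defined in the mcf file above. Run the commands from one node only:

# /opt/SUNWsamfs/sbin/sammkfs -S Data
# /opt/SUNWsamfs/sbin/sammkfs -S Crs
# /opt/SUNWsamfs/sbin/sammkfs -S Redo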

11. Configure the resource group in Sun Cluster to manage failover of the Sun StorageTek QFS metadata server.

a. Build and append the /etc/vfstab mount entries.

For example:


#
#
# RAC on shared QFS
Data    -       /cluster/Data    samfs   -       no      shared,notrace
Redo    -       /cluster/Redo     samfs   -       no      shared,notrace
Crs    -       /cluster/Crs    samfs   -       no      shared,notrace

b. Mount the file systems across the cluster on each node.

First, mount the shared Sun StorageTek QFS file systems on the current metadata server, and then mount the file system on each metadata client.
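
For example, using the mount points added to /etc/vfstab in Step a (a sketch; issue each mount command on the metadata server first and then on each client node):

# mount /cluster/Data
# mount /cluster/Crs
# mount /cluster/Redo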

To verify this step, type:
# df -h -F samfs

c. Create the Sun Cluster resource group to manage the metadata server.

Register the QFS resource type:
# scrgadm -a -t SUNW.qfs

Add the resource group with the Sun Cluster and shared Sun StorageTek QFS metadata nodes:
# scrgadm -a -g sc-QFS-rg -h scNode-A,scNode-B \
-y RG_DEPENDENCIES="rac-framework-rg"

Add the shared Sun StorageTek QFS file system resource and the SUNW.qfs resource type to the resource group:
# scrgadm -a -g sc-QFS-rg -t SUNW.qfs -j sc-qfs-fs-rs \
-x QFSFileSystem=/cluster/Data,/cluster/Redo,/cluster/Crs

Bring the resource group online:
# scswitch -Z -g sc-QFS-rg

The shared Sun StorageTek QFS file system is now ready to use.


About Configuration Examples

This chapter provides configuration examples for the Sun StorageTek QFS shared file system on a Sun Cluster system and for the unshared Sun StorageTek QFS file system on a Sun Cluster system. All configuration examples are based on a platform consisting of the following:

All configurations in this chapter are also based on CODE EXAMPLE 6-1. In this code example, the scdidadm(1M) command displays the DID devices, and the -L option lists the DID device paths, including those on all nodes in the Sun Cluster system.


CODE EXAMPLE 6-1 Command That Lists the DID Devices and Their DID Device Paths
# scdidadm -L
1   scnode-A:/dev/dsk/c0t0d0    /dev/did/dsk/d1
2   scnode-A:/dev/dsk/c0t1d0    /dev/did/dsk/d2
3   scnode-A:/dev/dsk/c0t6d0    /dev/did/dsk/d3
4   scnode-A:/dev/dsk/c6t1d0    /dev/did/dsk/d4
4   scnode-B:/dev/dsk/c7t1d0    /dev/did/dsk/d4
5   scnode-A:/dev/dsk/c6t2d0    /dev/did/dsk/d5
5   scnode-B:/dev/dsk/c7t2d0    /dev/did/dsk/d5
6   scnode-A:/dev/dsk/c6t3d0    /dev/did/dsk/d6
6   scnode-B:/dev/dsk/c7t3d0    /dev/did/dsk/d6
7   scnode-A:/dev/dsk/c6t4d0    /dev/did/dsk/d7
7   scnode-B:/dev/dsk/c7t4d0    /dev/did/dsk/d7
8   scnode-A:/dev/dsk/c6t5d0    /dev/did/dsk/d8    
8   scnode-B:/dev/dsk/c7t5d0    /dev/did/dsk/d8
9   scnode-B:/dev/dsk/c0t6d0    /dev/did/dsk/d9    
10  scnode-B:/dev/dsk/c1t0d0    /dev/did/dsk/d10
11  scnode-B:/dev/dsk/c1t1d0    /dev/did/dsk/d11

CODE EXAMPLE 6-1 shows that DID devices d4 through d8 are accessible from both Sun Cluster nodes (scnode-A and scnode-B). With the Sun StorageTek QFS file system sizing requirements and with knowledge of your intended application and configuration, you can decide on the most appropriate apportioning of devices to file systems. By using the Solaris format(1M) command, you can determine the sizing and partition layout of each DID device and resize the partitions on each DID device, if needed. Given the available DID devices, you can also configure multiple devices and their associated partitions to contain the file systems, according to your sizing requirements.


Configuring a Sun StorageTek QFS Shared File System in a Sun Cluster Environment

When you install a Sun StorageTek QFS shared file system in a Sun Cluster environment, you configure the file system's metadata server under the SUNW.qfs resource type. This makes the metadata server highly available and enables the Sun StorageTek QFS shared file system to be globally accessible on all configured nodes in the Sun Cluster environment.

A Sun StorageTek QFS shared file system is typically associated with a scalable application. The Sun StorageTek QFS shared file system is mounted on, and the scalable application is active on, one or more Sun Cluster nodes.

If a node in the Sun Cluster system fails, or if you switch over the resource group, the metadata server resource (Sun StorageTek QFS Sun Cluster agent) automatically relocates the file system's metadata server as necessary. This ensures that the other nodes' access to the shared file system is not affected.



Note - To manually relocate the metadata server for a Sun StorageTek QFS shared file system that is under control of the Sun Cluster system, you must use the Sun Cluster administrative commands. For more information about these commands, see the Sun Cluster documentation.
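
For example, using the resource group and node names configured later in this chapter, a sketch of voluntarily relocating the metadata server to node scnode-B looks like this:

# scswitch -z -g qfs-rg -h scnode-B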



Metadata Server Resource Considerations

When the Sun Cluster system boots, the metadata server resource ensures that the file system is mounted on all nodes that are part of the resource group. However, the file system mount on those nodes is not monitored. Therefore, in certain failure cases, the file system might be unavailable on certain nodes, even if the metadata server resource is in the online state.

If you use Sun Cluster administrative commands to bring the metadata server resource group offline, the file system under the metadata server resource remains mounted on the nodes. To unmount the file system (with the exception of a node that is shut down), you must bring the metadata server resource group into the unmanaged state by using the appropriate Sun Cluster administrative command.

To remount the file system at a later time, you must bring the resource group into a managed state and then into an online state.
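
The following sketch shows this sequence with the scswitch(1M) command, assuming the resource group name qfs-rg used later in this chapter; resources in the group may also need to be disabled before the group can be placed in the unmanaged state:

# scswitch -F -g qfs-rg
# scswitch -u -g qfs-rg
# scswitch -o -g qfs-rg
# scswitch -Z -g qfs-rg

The first two commands take the resource group offline and move it to the unmanaged state so that the file system can be unmounted; the last two commands return the group to the managed state and bring it back online.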

Example Configuration

This section shows an example of the Sun StorageTek QFS shared file system installed on raw DID devices with the Sun Cluster data service for Oracle Real Application Clusters. For detailed information on how to use the Sun StorageTek QFS shared file system with the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

As shown in CODE EXAMPLE 6-1, DID devices d4 through d8 are highly available and are contained on controller-based storage. For you to configure a Sun StorageTek QFS shared file system in a Sun Cluster environment, the controller-based storage must support device redundancy by using RAID-1 or RAID-5.

For simplicity in this example, two file systems are created:

qfs1, which is used for the Oracle Real Application Clusters software installation (ORACLE_HOME)

qfs2, which is used for the Oracle Real Application Clusters database files

Additionally, device d4 is used for Sun StorageTek QFS metadata. This device has two 50-gigabyte slices. The remaining devices, d5 through d8, are used for Sun StorageTek QFS file data.

This configuration involves five main steps, as detailed in the following subsections:

1. Preparing to create Sun StorageTek QFS file systems

2. Creating the file systems and configuring the Sun Cluster nodes

3. Validating the configuration

4. Configuring the network name service

5. Configuring the Sun Cluster data service for Oracle Real Application Clusters


To Prepare to Create Sun StorageTek QFS Shared File Systems

1. From one node in the Sun Cluster system, use the format(1M) utility to lay out partitions on /dev/did/dsk/d4 (CODE EXAMPLE 6-2).

In this example, the action is performed from node scnode-A.


CODE EXAMPLE 6-2 Laying Out Partitions on /dev/did/dsk/d4
# format /dev/did/rdsk/d4s2
# format> partition
[ output deleted ]
# partition> print
Current partition table (unnamed):
Total disk cylinders available: 12800 + 2 (reserved cylinders)
 
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       1 -  6400       50.00GB    (6400/0/0)  104857600
  1        usr    wm    6401 - 12800       50.00GB    (6400/0/0)  104857600
  2     backup    wu       0 - 12800      100.00GB    (6400/0/0)  209715200
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0
 
NOTE: Partition 2 (backup) will not be used and was created by format(1M) by default.

Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 50-gigabyte partition. Partition 1 is configured to be the same size as partition 0.

2. On the same node, use the format(1M) utility to lay out partitions on /dev/did/dsk/d5 (CODE EXAMPLE 6-3).


CODE EXAMPLE 6-3 Laying Out Partitions on /dev/did/dsk/d5
# format /dev/did/rdsk/d5s2
# format> partition
[ output deleted ]
# partition> print
Current partition table (unnamed):
Total disk cylinders available: 34530 + 2 (reserved cylinders)
 
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       1 - 34529      269.77GB    (34529/0/0)  565723136
  1        usr    wm       0 - 0            0         (0/0/0)      
  2     backup    wu       0 - 34529      269.77GB    (34530/0/0)  565739520
  3 unassigned    wu       0                0         (0/0/0)              0
  4 unassigned    wu       0                0         (0/0/0)              0
  5 unassigned    wu       0                0         (0/0/0)              0
  6 unassigned    wu       0                0         (0/0/0)              0
  7 unassigned    wu       0                0         (0/0/0)              0
 
NOTE: Partition 2 (backup) will not be used and was created by format(1M) by default.

3. Still on the same node, replicate the device d5 partitioning to devices d6 through d8.

This example shows the command for device d6:


# prtvtoc /dev/did/rdsk/d5s2 | fmthard -s - /dev/did/rdsk/d6s2
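
The same command, with only the target device changed, replicates the layout to devices d7 and d8:

# prtvtoc /dev/did/rdsk/d5s2 | fmthard -s - /dev/did/rdsk/d7s2
# prtvtoc /dev/did/rdsk/d5s2 | fmthard -s - /dev/did/rdsk/d8s2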

4. On all nodes that are potential hosts of the file systems, perform the following:

a. Configure the six partitions into two Sun StorageTek QFS shared file systems by adding two new configuration entries (qfs1 and qfs2) to the mcf file (CODE EXAMPLE 6-4).


CODE EXAMPLE 6-4 Adding Configuration Entries to the mcf File
# cat >> /etc/opt/SUNWsamfs/mcf <<EOF
#
# Sun StorageTek QFS file system configurations
#
# Equipment           Equipment  Equipment  Family  Device  Additional
# Identifier          Ordinal    Type       Set     State   Parameters
# ------------------  ---------  ---------  ------  ------  ----------
qfs1                  100        ma         qfs1    -       shared
/dev/did/dsk/d4s0     101        mm         qfs1    -
/dev/did/dsk/d5s0     102        mr         qfs1    -
/dev/did/dsk/d6s0     103        mr         qfs1    -

qfs2                  200        ma         qfs2    -       shared
/dev/did/dsk/d4s1     201        mm         qfs2    -
/dev/did/dsk/d7s0     202        mr         qfs2    -
/dev/did/dsk/d8s0     203        mr         qfs2    -

EOF

For more information about the mcf file, see Function of the mcf File or the Sun StorageTek QFS Installation and Upgrade Guide.

b. Edit the /etc/opt/SUNWsamfs/samfs.cmd file to add the mount options that are required for the Sun Cluster data service for Oracle Real Application Clusters (CODE EXAMPLE 6-5).


CODE EXAMPLE 6-5 Example samfs.cmd File
fs = qfs2
   stripe = 1
   sync_meta = 1
   mh_write
   qwrite
   forcedirectio
   rdlease = 300

For more information about the mount options that are required by the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

c. Validate that the configuration is correct.

Be sure to perform this validation after you have configured the mcf file and the samfs.cmd file on each node.


# /opt/SUNWsamfs/sbin/sam-fsd 


To Create the Sun StorageTek QFS Shared File System and Configure Sun Cluster Nodes

Perform this procedure for each file system you are creating. This example describes how to create the qfs1 file system.

1. Obtain the Sun Cluster private interconnect names by using the following command:


# /usr/cluster/bin/scconf -p | egrep "Cluster node name:|Node private hostname:"
Cluster node name:                                 scnode-A
  Node private hostname:                           clusternode1-priv
Cluster node name:                                 scnode-B
  Node private hostname:                           clusternode2-priv

2. On each node that is a potential host of the file system, do the following:

a. Use the samd(1M) config command, which signals to the Sun StorageTek QFS daemon that a new Sun StorageTek QFS configuration is available:


# samd config

b. Create the Sun StorageTek QFS shared hosts file for the file system (/etc/opt/SUNWsamfs/hosts.family-set-name), based on the Sun Cluster system's private interconnect names that you obtained in Step 1.

3. Edit the unique Sun StorageTek QFS shared file system's host configuration file with the Sun Cluster system's interconnect names (CODE EXAMPLE 6-6).

For Sun Cluster software failover and fencing operations, the Sun StorageTek QFS shared file system must use the same interconnect names as the Sun Cluster system.


CODE EXAMPLE 6-6 Editing Each File System's Host Configuration File
# cat > hosts.qfs1 <<EOF
# File  /etc/opt/SUNWsamfs/hosts.qfs1
# Host          Host IP                                 Server   Not  Server
# Name          Addresses                               Priority Used Host
# ------------- --------------------------------------- -------- ---- ----
scnode-A        clusternode1-priv                         1        -    server
scnode-B        clusternode2-priv                         2        -
 
EOF

4. From one node in the Sun Cluster system, use the sammkfs(1M) -S command to create the Sun StorageTek QFS shared file system:


# sammkfs -S qfs1 < /dev/null

5. On each node that is a potential host of the file system, do the following:

a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to set the mount point's access permissions to 755:


# mkdir /global/qfs1
# chmod 755 /global/qfs1
# chown root:other /global/qfs1

b. Add the Sun StorageTek QFS shared file system entry to the /etc/vfstab file:


# cat >> /etc/vfstab <<EOF
# device       device       mount      FS      fsck    mount      mount
# to mount     to fsck      point      type    pass    at boot    options
#
qfs1             -     /global/qfs1    samfs    -       no        shared
EOF


To Validate the Configuration

Perform this procedure for each file system you create. This example describes how to validate the configuration for file system qfs1.

1. If you do not know which node is acting as the metadata server for the file system, use the samsharefs(1M) -R command.

In CODE EXAMPLE 6-7 the metadata server for qfs1 is scnode-A.


CODE EXAMPLE 6-7 Determining Which Node Is the Metadata Server
# samsharefs -R qfs1
#
# Host file for family set 'qfs1'
#
# Version: 4    Generation: 1    Count: 2
# Server = host 1/scnode-A, length = 165
#
scnode-A clusternode1-priv 1 - server
scnode-B clusternode2-priv 2 -

2. Use the mount(1M) command to mount the file system first on the metadata server and then on each node in the Sun Cluster system.



Note - It is important that you mount the file system on the metadata server first.




# mount qfs1
# ls /global/qfs1
lost+found/

3. Validate voluntary failover by issuing the samsharefs(1M) -s command, which moves the metadata server of the Sun StorageTek QFS shared file system between nodes:


# samsharefs -s scnode-B qfs1
# ls /global/qfs1
lost+found/
# samsharefs -s scnode-A qfs1
# ls /global/qfs1
lost+found

4. Validate that the required Sun Cluster resource type is added to the resource configuration:


# scrgadm -p | egrep "SUNW.qfs"

5. If you cannot find the Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the resource configuration:


# scrgadm -a -t SUNW.qfs

6. Register and configure the SUNW.qfs resource type:


# scrgadm -a -g qfs-rg -h scnode-A,scnode-B
# scrgadm -a -g qfs-rg -t SUNW.qfs -j qfs-res \
	   -x QFSFileSystem=/global/qfs1,/global/qfs2

7. Use the scswitch(1M) -Z -g command to bring the resource group online:


# scswitch -Z -g qfs-rg

8. Ensure that the resource group is functional on all configured nodes:


# scswitch -z -g qfs-rg -h scnode-B
# scswitch -z -g qfs-rg -h scnode-A


To Configure the Sun Cluster Data Service for Oracle Real Application Clusters

This section provides an example of how to configure the data service for Oracle Real Application Clusters for use with Sun StorageTek QFS shared file systems. For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

1. Install the data service as described in the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.

2. Mount the Sun StorageTek QFS shared file systems.

3. Set the correct ownership and permissions on the file systems so that the Oracle database operations are successful:


# chown oracle:dba /global/qfs1 /global/qfs2
# chmod 755 /global/qfs1 /global/qfs2

4. As the oracle user, create the subdirectories that are required for the Oracle Real Application Clusters installation and database files:


$ id
uid=120(oracle) gid=520(dba)
$ mkdir /global/qfs1/oracle_install
$ mkdir /global/qfs2/oracle_db

The Oracle Real Application Clusters installation uses the /global/qfs1/oracle_install directory path as the value for the ORACLE_HOME environment variable that is used in Oracle operations. The Oracle Real Application Clusters database files' path is prefixed with the /global/qfs2/oracle_db directory path.

5. Install the Oracle Real Application Clusters software.

During the installation, provide the path for the installation defined in Step 4 (/global/qfs1/oracle_install).

6. Create the Oracle Real Application Clusters database.

During database creation, specify that you want the database files located in the qfs2 shared file system.

7. If you are automating the startup and shutdown of Oracle Real Application Clusters database instances, ensure that the required dependencies for resource groups and resources are set.

For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
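
For example, if the database instances are controlled by a resource group named rac-db-rg (a placeholder name, not a group created by this procedure), a dependency on the metadata server resource group might be expressed with the following sketch:

# scrgadm -c -g rac-db-rg -y RG_dependencies=qfs-rg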



Note - If you plan to automate the startup and shutdown of Oracle Real Application Clusters database instances, you must use Sun Cluster software version 3.1 9/04 or a compatible version.





Note - In shared Sun StorageTek QFS configurations that are being used for Oracle RAC 10g configurations, when the Oracle installer for Cluster Ready Services (CRS) prompts the user to execute root.sh, this command fails in some instances. In other instances, when an Oracle Cluster Registry (OCR) file is created by root.sh, it makes the CRS registry unstable.

The workaround is to preallocate the OCR file to be larger than 700416. For example, preallocate a one-megabyte file, as user oracle, before running root.sh, as shown here:

$ dd if=/dev/zero of=<OCR file path> bs=1024k count=1




Configuring an Unshared File System in a Sun Cluster Environment

When you install the unshared Sun StorageTek QFS file system on a Sun Cluster system, you configure the file system for high availability (HA) under the Sun Cluster HAStoragePlus resource type. An unshared Sun StorageTek QFS file system in a Sun Cluster system is typically associated with one or more failover applications, such as highly available network file server (HA-NFS) or highly available ORACLE (HA-ORACLE). Both the unshared Sun StorageTek QFS file system and the failover applications are active in a single resource group; the resource group is active on one Sun Cluster node at a time.

An unshared Sun StorageTek QFS file system is mounted on a single node at any given time. If the Sun Cluster fault monitor detects an error, or if you switch over the resource group, the unshared Sun StorageTek QFS file system and its associated HA applications fail over to another node, depending on how the resource group has been previously configured.

Any file system contained on a Sun Cluster global device group (/dev/global/*) can be used with the HAStoragePlus resource type. When a file system is configured with the HAStoragePlus resource type, it becomes part of a Sun Cluster resource group and the file system under Sun Cluster Resource Group Manager (RGM) control is mounted locally on the node where the resource group is active. When the RGM causes a resource group switchover or fails over to another configured Sun Cluster node, the unshared Sun StorageTek QFS file system is unmounted from the current node and remounted on the new node.

Each unshared Sun StorageTek QFS file system requires a minimum of two raw disk partitions or volume manager-controlled volumes (Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager), one for Sun StorageTek QFS metadata (inodes) and one for Sun StorageTek QFS file data. Configuring multiple partitions or volumes across multiple disks through multiple data paths increases unshared Sun StorageTek QFS file system performance. For information about sizing metadata and file data partitions, see Design Basics.
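
As a minimal sketch, a two-partition unshared file system might be defined in the mcf file as follows; the family set name qfsha1 is a placeholder, and the devices are raw global device partitions of the kind described in this chapter:

qfsha1                 100   ma   qfsha1   on
/dev/global/dsk/d4s0   101   mm   qfsha1
/dev/global/dsk/d4s1   102   mr   qfsha1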

This section provides three examples of Sun Cluster system configurations using the unshared Sun StorageTek QFS file system. In these examples, a file system is configured in combination with an HA-NFS file mount point on the following:

Raw global devices (Example 1)

Volumes controlled by Solstice DiskSuite/Solaris Volume Manager software (Example 2)

VERITAS Volume Manager-controlled (VxVM) volumes (Example 3)

For simplicity in all of these configurations, ten percent of each file system is used for Sun StorageTek QFS metadata, and the remaining space is used for Sun StorageTek QFS file data. For information about sizing and disk layout considerations, see the Sun StorageTek QFS Installation and Upgrade Guide.

Example 1: HA-NFS on Raw Global Devices

This example shows how to configure the unshared Sun StorageTek QFS file system with HA-NFS on raw global devices. For this configuration, the raw global devices must be contained on controller-based storage. This controller-based storage must support device redundancy through RAID-1 or RAID-5.

As shown in CODE EXAMPLE 6-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. The HAStoragePlus resource type requires the use of global devices, so each DID device (/dev/did/dsk/dx) is accessible as a global device by using the following syntax: /dev/global/dsk/dx.

The main steps in this example are as follows:

1. Prepare to create an unshared file system.

2. Create the file system and configure the Sun Cluster nodes.

3. Configure the network name service and the IP network multipathing (IPMP) validation testing.

4. Configure HA-NFS and configure the file system for high availability.


To Prepare to Create an Unshared Sun StorageTek QFS File System

1. Use the format(1M) utility to lay out the partitions on /dev/global/dsk/d4:


# format /dev/global/rdsk/d4s2
# format> partition
[ output deleted ]
# partition> print
Current partition table (original):
Total disk cylinders available: 34530 + 2 (reserved cylinders)
Part      Tag     Flag      Cylinders         Size            Blocks
 0   unassigned    wm       1 -  3543        20.76GB    (3543/0/0)   43536384
 1   unassigned    wm    3544 - 34529       181.56GB    (30986/0/0) 380755968
 2   backup        wu       0 - 34529       202.32GB    (34530/0/0) 424304640
 3   unassigned    wu       0                 0         (0/0/0)             0
 4   unassigned    wu       0                 0         (0/0/0)             0
 5   unassigned    wu       0                 0         (0/0/0)             0
 6   unassigned    wu       0                 0         (0/0/0)             0
 7   unassigned    wu       0                 0         (0/0/0)             0
 
NOTE: Partition 2 (backup) will not be used and was created by format(1m) by default.

Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20-gigabyte partition. The remaining space is configured into partition 1.

2. Replicate the global device d4 partitioning to global devices d5 through d7.

This example shows the command for global device d5:


# prtvtoc /dev/global/rdsk/d4s2 | fmthard \
-s - /dev/global/rdsk/d5s2

3. On all nodes that are potential hosts of the file system, perform the following:

a. Configure the eight partitions (four global devices, with two partitions each) into a Sun StorageTek QFS file system by adding a new file system entry to the mcf file.


# cat >> /etc/opt/SUNWsamfs/mcf <<EOF
 
#
# Sun StorageTek QFS file system configurations
#
# Equipment             Equipment  Equipment  Family    Device  Additional
# Identifier            Ordinal    Type       Set       State   Parameters
# --------------------  ---------  ---------  --------  ------  ----------
qfsnfs1                 100        ma         qfsnfs1   on
/dev/global/dsk/d4s0    101        mm         qfsnfs1
/dev/global/dsk/d5s0    102        mm         qfsnfs1
/dev/global/dsk/d6s0    103        mm         qfsnfs1
/dev/global/dsk/d7s0    104        mm         qfsnfs1
/dev/global/dsk/d4s1    105        mr         qfsnfs1
/dev/global/dsk/d5s1    106        mr         qfsnfs1
/dev/global/dsk/d6s1    107        mr         qfsnfs1
/dev/global/dsk/d7s1    108        mr         qfsnfs1
EOF

For information about the mcf file, see Function of the mcf File.

b. Validate that the configuration information you added to the mcf file is correct, and fix any errors in the mcf file before proceeding.

It is important to complete this step before you configure the Sun StorageTek QFS file system under the HAStoragePlus resource type.


# /opt/SUNWsamfs/sbin/sam-fsd


To Create the Sun StorageTek QFS File System and Configure Sun Cluster Nodes

1. On each node that is a potential host of the file system, issue the samd(1M) config command.

This command signals to the Sun StorageTek QFS daemon that a new Sun StorageTek QFS configuration is available.


# samd config

2. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the file system:


# sammkfs qfsnfs1 < /dev/null

3. On each node that is a potential host of the file system, do the following:

a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to set the mount point's access permissions to 755:


# mkdir /global/qfsnfs1
# chmod 755 /global/qfsnfs1
# chown root:other /global/qfsnfs1

b. Add the Sun StorageTek QFS file system entry to the /etc/vfstab file.

Note that the mount options field contains the sync_meta=1 value.


# cat >> /etc/vfstab <<EOF
 
# device      device         mount       FS       fsck     mount      mount
# to mount    to fsck        point       type     pass     at boot    options
#
qfsnfs1         -       /global/qfsnfs1    samfs       2         no       sync_meta=1
EOF

c. Validate the configuration by mounting and unmounting the file system:


# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1

4. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration:


# scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs"

5. If you cannot find a required Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the configuration:


# scrgadm -a -t SUNW.HAStoragePlus

# scrgadm -a -t SUNW.LogicalHostname

# scrgadm -a -t SUNW.nfs



To Configure the Network Name Service and the IPMP Validation Testing

This section provides an example of how to configure the network name service and the IPMP Validation Testing for your Sun Cluster nodes. For more information, see the Sun Cluster Software Installation Guide for Solaris OS, the System Administration Guide: IP Services, and the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).

1. Use vi or another text editor to edit the /etc/nsswitch.conf file so that node names are looked up through the Sun Cluster software and the local files before NIS.

Perform this step before you configure the Network Information Service (NIS) server.


# cat /etc/nsswitch.conf 
#
# /etc/nsswitch.nis:
#
# An example file that could be copied over to /etc/nsswitch.conf; it 
# uses NIS (YP) in conjunction with files.
#
# the following two lines obviate the "+" entry in /etc/passwd and /etc/group.
passwd:    files nis
group:     files nis
 
# Cluster s/w and local /etc/hosts file take precedence over NIS
hosts:    cluster files nis [NOTFOUND=return]
ipnodes:  files
# Uncomment the following line and comment out the above to resolve
# both IPv4 and IPv6 addresses from the ipnodes databases. Note that
# IPv4 addresses are searched in all of the ipnodes databases before 
# searching the hosts databases. Before turning this option on, consult
# the Network Administration Guide for more details on using IPv6.
# ipnodes: nis [NOTFOUND=return] files
 
networks: nis [NOTFOUND=return] files
protocols: nis [NOTFOUND=return] files
rpc: nis [NOTFOUND=return] files
ethers: nis [NOTFOUND=return] files
netmasks: nis [NOTFOUND=return] files
bootparams: nis [NOTFOUND=return] files
publickey: nis [NOTFOUND=return] files
 
netgroup: nis
 
automount: files nis
aliases: files nis
[remainder of file content not shown]

2. Verify that the changes you made to the /etc/nsswitch.conf are correct:


# grep '^hosts:' /etc/nsswitch.conf
hosts:    cluster files nis [NOTFOUND=return]
#

3. Set up IPMP validation testing using available network adapters.

The adapters qfe2 and qfe3 are used as examples.

a. Statically configure the IPMP test address for each adapter:


# cat >> /etc/hosts << EOF

#
# Test addresses for scnode-A
#
192.168.2.2      `uname -n`-qfe2
192.168.2.3      `uname -n`-qfe2-test
192.168.3.2      `uname -n`-qfe3
192.168.3.3      `uname -n`-qfe3-test
#
# Test addresses for scnode-B
#
192.168.2.4      `uname -n`-qfe2
192.168.2.5      `uname -n`-qfe2-test
192.168.3.4      `uname -n`-qfe3
192.168.3.5      `uname -n`-qfe3-test
EOF

b. Dynamically configure the IPMP adapters:


# ifconfig qfe2 plumb `uname -n`-qfe2-test netmask + broadcast + deprecated \
	-failover -standby group ipmp0 up
# ifconfig qfe2 addif `uname -n`-qfe2 up
# ifconfig qfe3 plumb `uname -n`-qfe3-test netmask + broadcast + deprecated \
	-failover -standby group ipmp0 up
# ifconfig qfe3 addif `uname -n`-qfe3 up

c. Make the IPMP configuration persistent across system reboots by creating the /etc/hostname.qfe2 and /etc/hostname.qfe3 files:


# cat > /etc/hostname.qfe2 << EOF
`uname -n`-qfe2-test netmask + broadcast + deprecated -failover -standby \
	group ipmp0 up addif `uname -n`-qfe2 up
EOF
 
# cat > /etc/hostname.qfe3 << EOF
`uname -n`-qfe3-test netmask + broadcast + deprecated -failover -standby \
	group ipmp0 up addif `uname -n`-qfe3 up
EOF


To Configure HA-NFS and the Sun StorageTek QFS File System for High Availability

This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.

1. Create the NFS share point for the Sun StorageTek QFS file system.

Note that the share point is contained within the /global file system, not within the Sun StorageTek QFS file system.


# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \ /global/nfs/SUNW.nfs/dfstab.nfs1-res

2. Create the NFS resource group:


# scrgadm -a -g nfs-rg -y PathPrefix=/global/nfs

3. Add the NFS logical host to the /etc/hosts table, using the address for your site:


# cat >> /etc/hosts << EOF
#
# IP Addresses for LogicalHostnames
#
192.168.2.10     lh-nfs1
EOF

4. Use the scrgadm(1M) -a -L -g command to add the logical host to the NFS resource group:


# scrgadm -a -L -g nfs-rg -l lh-nfs1

5. Use the scrgadm(1M) -c -g command to configure the HAStoragePlus resource type:


# scrgadm -c -g nfs-rg -h scnode-A,scnode-B 
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
	-x FilesystemMountPoints=/global/qfsnfs1 \
	-x FilesystemCheckCommand=/bin/true

6. Bring the resource group online:


# scswitch -Z -g nfs-rg

7. Configure the NFS resource type and set a dependency on the HAStoragePlus resource:


# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs \
	-y Resource_dependencies=qfsnfs1-res

8. Bring the NFS resource online:


# scswitch -e -j nfs1-res

The NFS resource /net/lh-nfs1/global/qfsnfs1 is now fully configured and is also highly available.

9. Before announcing the availability of the highly available NFS file system on the Sun StorageTek QFS file system, test the resource group to ensure that it can be switched between all configured nodes without errors and can be taken online and offline:


# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg

Example 2: HA-NFS on Volumes Controlled by Solstice DiskSuite/Solaris Volume Manager

This example shows how to configure the unshared Sun StorageTek QFS file system with HA-NFS on volumes controlled by Solstice DiskSuite/Solaris Volume Manager software. With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5 volumes. Typically, Solaris Volume Manager is used only when the underlying controller-based storage is not redundant.

As shown in CODE EXAMPLE 6-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. Solaris Volume Manager requires that DID devices be used to populate the raw devices from which Solaris Volume Manager can configure volumes. Solaris Volume Manager creates globally accessible disk groups, which can then be used by the HAStoragePlus resource type for creating Sun StorageTek QFS file systems.

This example follows these steps:

1. Prepare the Solstice DiskSuite/Solaris Volume Manager software.

2. Prepare to create an unshared file system.

3. Create the file system and configure the Sun Cluster nodes.

4. Configure the network name service and the IPMP validation testing.

5. Configure HA-NFS and configure the file system for high availability.


To Prepare the Solstice DiskSuite/Solaris Volume Manager Software

1. Determine whether a Solaris Volume Manager metadatabase (metadb) is already configured on each node that is a potential host of the Sun StorageTek QFS file system:


# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        16              8192            /dev/dsk/c2t0d0s7

If the metadb(1M) command does not return a metadatabase configuration, then on each node, create three or more database replicas on one or more local disks. Each replica must be at least 16 megabytes in size. For more information about creating the metadatabase configuration, see the Sun Cluster Software Installation Guide for Solaris OS.
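
For example, the following sketch creates three 16-megabyte replicas on one local disk slice; the slice name is a placeholder, and the -l option gives the replica length in 512-byte blocks:

# metadb -a -f -c3 -l 32768 /dev/dsk/c0t0d0s7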

2. Create an HA-NFS disk group to contain all Solaris Volume Manager volumes for this Sun StorageTek QFS file system:


# metaset -s nfsdg -a -h scnode-A scnode-B

3. Add DID devices d4 through d7 to the pool of raw devices from which Solaris Volume Manager can create volumes:


# metaset -s nfsdg -a /dev/did/dsk/d4 /dev/did/dsk/d5 \
	/dev/did/dsk/d6 /dev/did/dsk/d7 


To Prepare For a Sun StorageTek QFS File System

1. Use the format(1M) utility to lay out partitions on /dev/global/dsk/d4:


# format /dev/global/rdsk/d4s2
# format> partition
[ output deleted ]
# partition> print
Current partition table (original):
Total disk cylinders available: 34530 + 2 (reserved cylinders)
Part      Tag     Flag      Cylinders         Size            Blocks
 0   unassigned    wm       1 -  3543        20.76GB    (3543/0/0)   43536384
 1   unassigned    wm    3544 - 34529       181.56GB    (30986/0/0) 380755968
 2   backup        wu       0 - 34529       202.32GB    (34530/0/0) 424304640
 3   unassigned    wu       0                 0         (0/0/0)             0
 4   unassigned    wu       0                 0         (0/0/0)             0
 5   unassigned    wu       0                 0         (0/0/0)             0
 6   unassigned    wu       0                 0         (0/0/0)             0
 7   unassigned    wu       0                 0         (0/0/0)             0
 
NOTE: Partition 2 (backup) will not be used and was created by format(1m) by default.

This example shows that partition or slice 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20-gigabyte partition. The remaining space is configured into partition 1.

2. Replicate the partitioning of DID device d4 to DID devices d5 through d7.

This example shows the command for device d5:


# prtvtoc /dev/global/rdsk/d4s2 | fmthard \
-s - /dev/global/rdsk/d5s2
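
The same command, with only the target device changed, replicates the layout to devices d6 and d7:

# prtvtoc /dev/global/rdsk/d4s2 | fmthard -s - /dev/global/rdsk/d6s2
# prtvtoc /dev/global/rdsk/d4s2 | fmthard -s - /dev/global/rdsk/d7s2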

3. Configure the eight partitions (four DID devices, two partitions each) into two RAID-1 (mirrored) Sun StorageTek QFS metadata volumes and one RAID-5 (parity-striped) Sun StorageTek QFS file data volume:

a. Combine partition (slice) 0 of these four drives into two RAID-1 sets:


# metainit -s nfsdg -f d1 1 1 /dev/did/dsk/d4s0
# metainit -s nfsdg -f d2 1 1  /dev/did/dsk/d5s0
# metainit -s nfsdg d10 -m d1 d2
# metainit -s nfsdg -f d3 1 1 /dev/did/dsk/d6s0
# metainit -s nfsdg -f d4 1 1  /dev/did/dsk/d7s0
# metainit -s nfsdg d11 -m d3 d4

b. Combine partition 1 of these four drives into a RAID-5 set:


# metainit -s nfsdg d20 -p /dev/did/dsk/d4s1 205848574b
# metainit -s nfsdg d21 -p /dev/did/dsk/d5s1 205848574b
# metainit -s nfsdg d22 -p /dev/did/dsk/d6s1 205848574b
# metainit -s nfsdg d23 -p /dev/did/dsk/d7s1 205848574b
# metainit -s nfsdg d30 -r d20 d21 d22 d23

c. On each node that is a potential host of the file system, add the Sun StorageTek QFS file system entry to the mcf file:


# cat >> /etc/opt/SUNWsamfs/mcf <<EOF
 
# Sun StorageTek QFS file system configurations
#
# Equipment              Equipment  Equipment  Family    Device  Additional
# Identifier             Ordinal    Type       Set       State   Parameters
# ---------------------  ---------  ---------  --------  ------  ----------
qfsnfs1                  100        ma         qfsnfs1   on
/dev/md/nfsdg/dsk/d10    101        mm         qfsnfs1
/dev/md/nfsdg/dsk/d11    102        mm         qfsnfs1
/dev/md/nfsdg/dsk/d30    103        mr         qfsnfs1
EOF

For more information about the mcf file, see Function of the mcf File.

4. Validate that the mcf(4) configuration is correct on each node, and fix any errors in the mcf file before proceeding.


# /opt/SUNWsamfs/sbin/sam-fsd


To Create the Sun StorageTek QFS File System and Configure Sun Cluster Nodes

1. On each node that is a potential host of the file system, use the samd(1M) config command.

This command signals to the Sun StorageTek QFS daemon that a new Sun StorageTek QFS configuration is available.


# samd config

2. Enable Solaris Volume Manager mediator detection for the disk group by adding each node as a mediator host, which assists the Sun Cluster system in detecting drive errors:


# metaset -s nfsdg -a -m scnode-A
# metaset -s nfsdg -a -m scnode-B

3. On each node that is a potential host of the file system, ensure that the NFS disk group exists:


# metaset -s nfsdg -t

4. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the file system:


# sammkfs qfsnfs1 < /dev/null

5. On each node that is a potential host of the file system, do the following:

a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to set the mount point's access permissions to 755:


# mkdir /global/qfsnfs1
# chmod 755 /global/qfsnfs1
# chown root:other /global/qfsnfs1

b. Add the Sun StorageTek QFS file system entry to the /etc/vfstab file.

Note that the mount options field contains the sync_meta=1 value.


# cat >> /etc/vfstab << EOF
# device       device       mount      FS      fsck    mount      mount
# to mount     to fsck      point      type    pass    at boot    options
#
qfsnfs1         -    /global/qfsnfs1   samfs    2       no      sync_meta=1
EOF

c. Validate the configuration by mounting and unmounting the file system.

Perform this step one node at a time. In this example, the qfsnfs1 file system is mounted and unmounted on one node.


# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1



Note - When testing the mount point, use the metaset -r (release) and -t (take) command to move the nfsdg disk group between Sun Cluster nodes. Then use the samd(1M) config command to alert the daemon of the configuration changes.
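
For example, a sketch of moving ownership of the disk group from scnode-A to scnode-B during this test:

scnode-A# metaset -s nfsdg -r
scnode-B# metaset -s nfsdg -t
scnode-B# samd config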



6. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration:


# scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs"

7. If you cannot find a required Sun Cluster resource type, add it with one or more of the following commands:


# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.LogicalHostname
# scrgadm -a -t SUNW.nfs


To Configure the Network Name Service and the IPMP Validation Testing

To configure the network name service and the IPMP validation testing, follow the instructions in To Configure the Network Name Service and the IPMP Validation Testing.


To Configure HA-NFS and the Sun StorageTek QFS File System for High Availability

This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.

1. Create the NFS share point for the Sun StorageTek QFS file system.

Note that the share point is contained within the /global file system, not within the Sun StorageTek QFS file system.


# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \ /global/nfs/SUNW.nfs/dfstab.nfs1-res

2. Create the NFS resource group:


# scrgadm -a -g nfs-rg -y PathPrefix=/global/nfs

3. Add a logical host to the NFS resource group:


# scrgadm -a -L -g nfs-rg -l lh-nfs1

4. Configure the HAStoragePlus resource type:


# scrgadm -c -g nfs-rg -h scnode-A,scnode-B 
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
	-x FilesystemMountPoints=/global/qfsnfs1 \
	-x FilesystemCheckCommand=/bin/true

5. Bring the resource group online:


# scswitch -Z -g nfs-rg

6. Configure the NFS resource type and set a dependency on the HAStoragePlus resource:


# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs \
	-y Resource_dependencies=qfsnfs1-res

7. Use the scswitch(1M) -e -j command to bring the NFS resource online:


# scswitch -e -j nfs1-res

The NFS resource /net/lh-nfs1/global/qfsnfs1 is fully configured and highly available.

8. Before you announce the availability of the highly available NFS file system on the Sun StorageTek QFS file system, test the resource group to ensure that it can be switched between all configured nodes without errors and can be taken online and offline:


# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg

Example 3: HA-NFS on VxVM Volumes

This example shows how to configure the unshared Sun StorageTek QFS file system with HA-NFS on VERITAS Volume Manager-controlled volumes (VxVM volumes). With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5. Typically, VxVM is used only when the underlying storage is not redundant.

As shown in CODE EXAMPLE 6-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. VxVM requires that shared DID devices be used to populate the raw devices from which VxVM configures volumes. VxVM creates highly available disk groups by registering the disk groups as Sun Cluster device groups. These disk groups are not globally accessible, but can be failed over, making them accessible to at least one node. The disk groups can be used by the HAStoragePlus resource type.



Note - The VxVM packages are separate, additional packages that must be installed, patched, and licensed. For information about installing VxVM, see the VxVM Volume Manager documentation.



To use Sun StorageTek QFS software with VxVM, you must install the following VxVM packages:

This example follows these steps:

1. Configure the VxVM software.

2. Prepare to create an unshared file system.

3. Create the file system and configure the Sun Cluster nodes.

4. Validate the configuration.

5. Configure the network name service and the IPMP validation testing.

6. Configure HA-NFS and configure the file system for high availability.


To Configure the VxVM Software

This section provides an example of how to configure the VxVM software for use with the Sun StorageTek QFS software. For more detailed information about the VxVM software, see the VxVM documentation.

1. Determine the status of dynamic multipathing (DMP) for VERITAS.


# vxdmpadm listctlr all

2. Use the scdidadm(1M) utility to determine the HBA controller number of the physical devices to be used by VxVM.

As shown in the following example, the multi-node accessible storage is available from scnode-A using HBA controller c6, and from node scnode-B using controller c7:


# scdidadm -L
[ some output deleted]
4   scnode-A:/dev/dsk/c6t60020F20000037D13E26595500062F06d0 /dev/did/dsk/d4
4   scnode-B:/dev/dsk/c7t60020F20000037D13E26595500062F06d0 /dev/did/dsk/d4

3. Use VxVM to list the available storage as seen through controller c6:


# vxdmpadm getsubpaths ctlr=c6

4. Place all of this controller's devices under VxVM control:


# vxdiskadd fabric_

5. Create a disk group, create volumes, and then start the new disk group:


# /usr/sbin/vxdg init nfsdg nfsdg00=disk0 \
nfsdg01=disk1 nfsdg02=disk2 nfsdg03=disk3

6. Ensure that the previously started disk group is active on this system:


# vxdg import nfsdg
# vxdg free

7. Configure two mirrored volumes for Sun StorageTek QFS metadata and two mirrored volumes for Sun StorageTek QFS file data.

These mirroring operations are performed as background processes, given the length of time they take to complete.


# vxassist -g nfsdg make m1 10607001b
# vxassist -g nfsdg mirror m1&
# vxassist -g nfsdg make m2 10607001b
# vxassist -g nfsdg mirror m2&
# vxassist -g nfsdg make m10 201529000b
# vxassist -g nfsdg mirror m10&
# vxassist -g nfsdg make m11 201529000b
# vxassist -g nfsdg mirror m11&

8. Configure the previously created VxVM disk group as a Sun Cluster-controlled disk group:


# scconf -a -D type=vxvm,name=nfsdg,nodelist=scnode-A:scnode-B


procedure icon  To Prepare to Create a Sun StorageTek QFS File System

Perform this procedure on each node that is a potential host of the file system.

1. Add the Sun StorageTek QFS file system entry to the mcf file.


CODE EXAMPLE 6-8 Addition of the File System to the mcf File
# cat >> /etc/opt/SUNWsamfs/mcf   <<EOF
# Sun StorageTek QFS file system configurations
#
# Equipment	             	Equipment	  Equipment  	Family	     Device    Additional
# Identifier		            Ordinal	    Type	       Set	        State     Parameters
# ------------------    --------    ---------  -------    ------    ----------
qfsnfs1                   100        ma        qfsnfs1     on
/dev/vx/dsk/nfsdg/m1      101        mm        qfsnfs1
/dev/vx/dsk/nfsdg/m2      102        mm        qfsnfs1
/dev/vx/dsk/nfsdg/m10     103        mr        qfsnfs1
/dev/vx/dsk/nfsdg/m11     104        mr        qfsnfs1
EOF

For more information about the mcf file, see Function of the mcf File.

2. Validate that the mcf(4) configuration is correct, and correct any errors in the mcf file before proceeding:


# /opt/SUNWsamfs/sbin/sam-fsd


procedure icon  To Create the Sun StorageTek QFS File System and Configure Sun Cluster Nodes

1. On each node that is a potential host of the file system, use the samd(1M) config command.

This command signals to the Sun StorageTek QFS daemon that a new Sun StorageTek QFS configuration is available.


# samd config

2. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the file system:


# sammkfs qfsnfs1 < /dev/null

3. On each node that is a potential host of the file system, do the following:

a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to set 755 permissions so that users other than root can read and search the mount point:


# mkdir /global/qfsnfs1
# chmod 755 /global/qfsnfs1
# chown root:other /global/qfsnfs1

b. Add the Sun StorageTek QFS file system entry to the /etc/vfstab file.

Note that the mount options field contains the sync_meta=1 value.


# cat >> /etc/vfstab << EOF
# device       device       mount       FS      fsck    mount      mount
# to mount     to fsck      point       type    pass    at boot    options
# 
qfsnfs1           -    /global/qfsnfs1  samfs    2        no      sync_meta=1
EOF


procedure icon  To Validate the Configuration

1. Validate that all nodes that are potential hosts of the file system are configured correctly.

To do this, move the disk group that you created in To Configure the VxVM Software to the node, and mount and then unmount the file system. Perform this validation one node at a time.


# scswitch -z -D nfsdg -h scnode-B
# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1

2. Ensure that the required Sun Cluster resource types have been added to the resource configuration:


# scrgadm -p | egrep "SUNW.HAStoragePlus|SUNW.LogicalHostname|SUNW.nfs"

3. If you cannot find a required Sun Cluster resource type, add it with one or more of the following commands:


# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -t SUNW.LogicalHostname
# scrgadm -a -t SUNW.nfs


procedure icon  To Configure the Network Name Service and the IPMP Validation Testing

To configure the Network Name Service and the IPMP validation testing, follow the instructions in To Configure the Network Name Service and the IPMP Validation Testing.


procedure icon  To Configure HA-NFS and the Sun StorageTek QFS File System for High Availability

To configure HA-NFS and the file system for high availability, follow the instructions in To Configure HA-NFS and the Sun StorageTek QFS File System for High Availability.


Configuring Shared Clients Outside the Cluster

If you are configuring a Sun Cluster environment and would like to have shared clients that are outside of the cluster, perform the following configurations.

The example below is based on a two-node metadata server cluster configuration.

Configuration Prerequisites

The following items must be configured or verified in order to set up shared clients outside the cluster:

Sun StorageTek QFS Metadata Server Sun Cluster Nodes

The following requirements must be met for the Sun StorageTek QFS metadata server Sun Cluster nodes:

Sun StorageTek QFS Metadata Client Nodes

The following requirements must be met for Sun StorageTek QFS metadata client nodes:

Sun Cluster Device Configuration

The localonly flag must be set on all data devices. The /etc/opt/SUNWsamfs/mcf file identifies which devices are used as Sun StorageTek QFS data devices; set local mode on each of those devices.

Perform the following as root on any nodes running under Sun Cluster:

scconf -r -D name=dsk/dX,nodelist=node2

scconf -c -D name=dsk/dX,localonly=true

Requirements for Configuring Clients Outside the Cluster

Due to the complexity of a configuration that includes both Sun Cluster and Shared Sun StorageTek QFS clients, a separate private network is mandatory for Sun StorageTek QFS metadata traffic. In addition, the following should also be true:

Minimum Software Release Levels

The following minimum software release levels are required:

Hardware Architecture Supported

The following hardware architectures are supported:



Note - Mixed architectures are not supported.



Storage Requirements

The shared storage configuration must include hardware-level mirroring or RAID-5 support. Servers and clients should use the Sun StorageTek Traffic Manager (MPxIO) configuration, and only shared storage is supported.
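
For example, on a Solaris 10 system the Sun StorageTek Traffic Manager can typically be enabled and verified with the stmsboot(1M) utility. This is only a minimal sketch; the exact procedure depends on the operating system release and HBAs, so confirm it against the Solaris documentation:


# stmsboot -e
# stmsboot -L

The -e option enables MPxIO for fibre-channel devices and requires a reboot; after the reboot, the -L option lists the mappings from non-MPxIO to MPxIO device names.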

Configuration Instructions

The following examples use a configuration consisting of three SPARC Sun Cluster nodes that are identified as follows:

ctelab30 MDS # SPARC Sun Cluster Node
ctelab31 MDS # SPARC Sun Cluster Node
ctelab32 MDC # SPARC QFS Client Node


procedure icon  To Edit the /etc/hosts File

After installation of the operating system, prepare the nodes by editing the /etc/hosts file on each node.

For example:


### SC Cluster Nodes ###
129.152.4.57 ctelab30 # Cluster Node
129.152.4.58 ctelab31 # Cluster Node
129.152.4.59 ctelab32 # QFS Client Node

### SC Logical ###
192.168.4.100 sc-qfs1

### QFS NET ###
## ctelab30
192.168.4.20 ctelab30-4
192.168.4.160 ctelab30-qfe1-test
192.168.4.210 ctelab30-qfe2-test

## ctelab31
192.168.4.21 ctelab31-4
192.168.4.161 ctelab31-qfe1-test
192.168.4.211 ctelab31-qfe2-test

## ctelab32
192.168.4.22 ctelab32-4

 


procedure icon  To Configure the Metadata Server Network

The following examples illustrate the setup process for the server network. These examples assume the following settings:

For this example the /etc/hosts, /etc/netmasks, /etc/nsswitch.conf, /etc/hostname.qfe1, and /etc/hostname.qfe2 files must be modified on each server cluster node, as follows:

1. Check the /etc/nsswitch.conf file.

For example:

hosts: cluster files dns nis

2. Append the following to the /etc/netmasks file:

192.168.4.0 255.255.255.0

3. Edit the /etc/hostname.qfe1 file so that it contains the following:

ctelab30-4 netmask + broadcast + group qfs_ipmp1 up addif ctelab30-qfe1-test deprecated -failover netmask + broadcast + up

4. Edit the /etc/hostname.qfe2 file so that it contains the following:

ctelab30-qfe2-test netmask + broadcast + deprecated group qfs_ipmp1 -failover standby up


procedure icon  To Configure the Metadata Client Network

The following examples illustrate the setup process for the client network. These examples assume the following settings:

For this example, the /etc/hosts, /etc/netmasks, /etc/nsswitch.conf, /etc/hostname.qfe1, and /etc/hostname.qfe2 files must be modified on each metadata client (MDC) node, as follows:

1. Check the /etc/nsswitch.conf file and modify as follows:

hosts: files dns nis

2. Append the following to the /etc/netmasks file:

192.168.4.0 255.255.255.0

3. Edit the /etc/hostname.qfe1 file to contain the following:

ctelab32-4


procedure icon  To Install and Configure Sun Cluster

After the operating system has been prepared and the nodes have the MPxIO multipathing software enabled, you can install and configure the Sun Cluster software as follows:

1. Install the Sun Cluster software, following the Sun Cluster documentation.
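
For example, the cluster framework is typically configured with the interactive scinstall(1M) utility, run from the Sun Cluster media or, on a node where the framework packages are already installed, from /usr/cluster/bin. This is only a sketch; the Sun Cluster documentation is the authoritative procedure:


# /usr/cluster/bin/scinstall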

2. Identify shared storage devices to be used as quorum devices.

For example:

scdidadm -L
scconf -a -q globaldev=dx
scconf -c -q reset


procedure icon  To Configure the Sun StorageTek QFS Metadata Server

After the Sun Cluster software has been installed and the cluster configuration has been verified, you can install and configure the Sun StorageTek QFS MDS, as follows:

1. Install the Sun StorageTek QFS software by following the instructions in the Sun StorageTek QFS Installation and Upgrade Guide.

For example:

# pkgadd -d . SUNWqfsr SUNWqfsu

2. Using the Sun Cluster command scdidadm -L, identify the devices that will be used for the Sun StorageTek QFS configuration.
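
For example (a minimal sketch; the /dev/did/dsk/dN names in the command output are the DID devices that appear in the mcf file in the next step):


# /usr/cluster/bin/scdidadm -L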

3. Edit the mcf file to reflect the file system devices.

For example:


#
# File system Qfs1
#
Qfs1	2  ma	Qfs1	on	shared 
/dev/did/dsk/d7s0  20  mm	Qfs1	on 
/dev/did/dsk/d8s0  21  mm	Qfs1	on 
/dev/did/dsk/d16s0 22  mr	Qfs1	on 
/dev/did/dsk/d10s0 23  mr	Qfs1	on 
/dev/did/dsk/d13s0 24  mr	Qfs1	on 
#
# File system Qfs2
#
Qfs2	5  ma	Qfs2	on	shared 
/dev/did/dsk/d9s0  50  mm	Qfs2	on 
/dev/did/dsk/d11s0 51  mm	Qfs2	on 
/dev/did/dsk/d17s0 52  mr	Qfs2	on
/dev/did/dsk/d12s0 53  mr	Qfs2	on 
/dev/did/dsk/d14s0 54  mr	Qfs2	on 
/dev/did/dsk/d15s0 55  mr	Qfs2	on 
/dev/did/dsk/d18s0 56  mr	Qfs2	on

4. Set local mode on the MDS Sun StorageTek QFS data devices.

For example, for the Qfs1 file system defined above, the following would be carried out for devices defined as mr devices:


# /usr/cluster/bin/scconf -r -D name=dsk/d16,nodelist=ctelab31 
# /usr/cluster/bin/scconf -c -D name=dsk/d16,localonly=true 
# /usr/cluster/bin/scconf -r -D name=dsk/d10,nodelist=ctelab31 
# /usr/cluster/bin/scconf -c -D name=dsk/d10,localonly=true 
# /usr/cluster/bin/scconf -r -D name=dsk/d13,nodelist=ctelab31 
# /usr/cluster/bin/scconf -c -D name=dsk/d13,localonly=true

5. Edit the /etc/opt/SUNWsamfs/defaults.conf file.

For example:


trace 
all = on 
sam-fsd.size = 10M 
sam-sharefsd.size = 10M 
endtrace

6. Build the Sun StorageTek QFS file system hosts files.

For information on the hosts files, see the Sun StorageTek QFS Installation and Upgrade Guide and Changing the Shared Hosts File.



Note - Because the MDC is outside the cluster, Sun StorageTek QFS metadata traffic to and from it must be carried over the network. The MDC is not a member of the Sun Cluster configuration, so a logical host is used for this traffic. In this example configuration, sc-qfs1 is that hostname.



To build the shared host table on the MDS, do the following:

a. Use the Sun Cluster scconf command to obtain the host order information. For example:

# /usr/cluster/bin/scconf -p | egrep "Cluster node name:|Node private hostname:|Node ID:"

b. Make note of the scconf command output. For example:


Cluster node name:			   ctelab30
Node ID:				   1 
Node private hostname:			   clusternode1-priv 
 
Cluster node name:			   ctelab31 
Node ID:				   2 
Node private hostname:			   clusternode2-priv

c. Create the shared hosts file.

For example, the /etc/opt/SUNWsamfs/hosts.Qfs1 file would contain the following:


# 
# MDS 
# Shared MDS Host file for family set 'Qfs1'
#
#
ctelab30 clusternode1-priv,sc-qfs1	1	-	server 
ctelab31 clusternode2-priv,sc-qfs1	2	- 
ctelab32 ctelab32-4	-	-

d. Create the local hosts file.

For example, the /etc/opt/SUNWsamfs/hosts.Qfs1.local file would contain the following:


# 
# MDS 
# Local MDS Host file for family set 'Qfs1' 
ctelab30    clusternode1-priv 
ctelab31    clusternode2-priv

7. Create the file system using the sammkfs command.

For example:

# /opt/SUNWsamfs/sbin/sammkfs -S Qfs1

8. Prepare the mount points on each cluster node.

For example:

# mkdir -p /cluster/qfs1 /cluster/qfs2

9. Append file system entries to the /etc/vfstab file.

For example:


### 
# QFS Filesystems 
### 
Qfs1 - /cluster/qfs1 samfs - no shared 
Qfs2 - /cluster/qfs2 samfs - no shared

10. Mount the file systems on each cluster node.

For example:

# mount Qfs1
# mount Qfs2

11. Create the Sun Cluster MDS resource group.

Carry out the following steps to create the MDS resource group under Sun Cluster:

a. Add the QFS Resource type.

For example:

# /usr/cluster/bin/scrgadm -a -t SUNW.qfs

b. Create the MDS resource group.

For example:


# /usr/cluster/bin/scrgadm -a -g sc-qfs-rg -h ctelab30,ctelab31
# /usr/cluster/bin/scrgadm -c -g sc-qfs-rg -y RG_description="Metadata Server + MDC Clients"

c. Add the logical hostname to the resource group.

For example:


# /usr/cluster/bin/scrgadm -a -L -g sc-qfs-rg -l sc-qfs1 -n qfs_ipmp1@ctelab30,qfs_ipmp1@ctelab31
# /usr/cluster/bin/scrgadm -c -j sc-qfs1 -y R_description="Logical Hostname resource for sc-qfs1"

d. Add the Sun StorageTek QFS file system resource to the MDS resource group.

For example:


# /usr/cluster/bin/scrgadm -a -g sc-qfs-rg -t SUNW.qfs -j fs-qfs-rs \
-x QFSFileSystem=/cluster/qfs1,/cluster/qfs2 -y Resource_dependencies=sc-qfs1

e. Bring the resource group online.

For example:

# /usr/cluster/bin/scswitch -Z -g sc-qfs-rg

f. Check the status.

For example:

# /usr/cluster/bin/scstat -g


procedure icon  To Configure Sun StorageTek QFS Metadata Client

After the operating system has been installed on all metadata clients, you can proceed to Sun StorageTek QFS client installation and configuration.

Before carrying out these instructions, verify that MPxIO has been enabled and that the clients can access all disk devices.

1. Install the Sun StorageTek QFS software by following the instructions in the Sun StorageTek QFS Installation and Upgrade Guide.

For example:

# pkgadd -d . SUNWqfsr SUNWqfsu

2. Use the format command on the MDC and the Sun Cluster scdidadm -L command on the MDS to identify the devices that will be used for the Sun StorageTek QFS configuration.
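
For example, run the following on the metadata client and on the metadata server, respectively. This is a minimal sketch; format is used non-interactively here only to list the disks that the client can see:


# format < /dev/null
# /usr/cluster/bin/scdidadm -L

Match the device names in the client's format output (for example, /dev/dsk/c6t...d0) against the DID devices reported on the MDS so that each mr entry in the client mcf file refers to the same LUN as the corresponding entry in the MDS mcf file.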

3. Build the mcf files on the metadata clients.

For example:


# 
# File system Qfs1 
# 
Qfs1	   2  ma Qfs1 on shared 
nodev	  20  mm Qfs1 on 
nodev	  21  mm Qfs1 on 
/dev/dsk/c6t600C0FF00000000000332B21D0B90000d0s0 22  mr Qfs1 on 
/dev/dsk/c6t600C0FF0000000000876E9124FAF9C00d0s0 23  mr Qfs1 on 
/dev/dsk/c6t600C0FF000000000004CAD7CC3CDE500d0s0 24  mr Qfs1 on 
# 
# File system Qfs2 
#
Qfs2	   5  ma Qfs2 on shared 
nodev	  50  mm Qfs2 on 
nodev	  51  mm Qfs2 on 
/dev/dsk/c6t600C0FF00000000000332B057D2FF100d0s0 52  mr Qfs2 on 
/dev/dsk/c6t600C0FF0000000000876E975EDA6A000d0s0 53  mr Qfs2 on 
/dev/dsk/c6t600C0FF0000000000876E9780ECA8100d0s0 54  mr Qfs2 on 
/dev/dsk/c6t600C0FF000000000004CAD139A855500d0s0 55  mr Qfs2 on 
/dev/dsk/c6t600C0FF000000000004CAD4C40941C00d0s0 56  mr Qfs2 on 

4. Edit the /etc/opt/SUNWsamfs/defaults.conf file.

For example:


trace 
all = on 
sam-fsd.size = 10M 
sam-sharefsd.size = 10M 
endtrace

5. Build the Sun StorageTek QFS file system hosts files.

Use the information from the MDS hosts files and follow the examples below.



Note - For metadata communications between the MDS and the MDC, clients that are not members of the cluster must communicate over the logical host.



a. Create the shared hosts file.

For example, the /etc/opt/SUNWsamfs/hosts.Qfs1 file would contain the following:


# 
# MDC 
# Shared Client Host file for family set 'Qfs1' 
ctelab30   sc-qfs1	     1     -    server 
ctelab31   sc-qfs1	     2     - 
ctelab32   ctelab32-4	     -	    -

b. Create the local hosts file.

For example, the /etc/opt/SUNWsamfs/hosts.Qfs1.local file would contain the following:


# 
# MDC 
# Local Client Host file for family set 'Qfs1' 
ctelab30 sc-qfs1@ctelab32-4 
ctelab31 sc-qfs1@ctelab32-4



Note - The /etc/opt/SUNWsamfs/hosts.Qfs1.local file is different for each client. In this example, the client uses its interface configured on ctelab32-4 to bind to host sc-qfs1 for metadata traffic.



6. Create the mount points on each MDC node.

For example:

# mkdir -p /cluster/qfs1 /cluster/qfs2

7. Edit the /etc/vfstab file.

For example:


### 
# QFS Filesystems 
### 
Qfs1 - /cluster/qfs1 samfs - yes bg,shared 
Qfs2 - /cluster/qfs2 samfs - yes bg,shared

8. Mount the file systems on each MDC node.

For example:

# mount Qfs1
# mount Qfs2


Changing the Sun StorageTek QFS Configuration

This section demonstrates how to make changes to, disable, or remove the Sun StorageTek QFS shared or unshared file system configuration in a Sun Cluster environment. It contains the following sections:


procedure icon  To Change the Shared File System Configuration

This example procedure is based on the example in Example Configuration.

1. Log in to each node as the oracle user, shut down the database instance, and stop the listener:


$ sqlplus "/as sysdba"
SQL > shutdown immediate
SQL > exit
$ lsnrctl stop listener 

2. Log in to the metadata server as superuser and bring the metadata server resource group into the unmanaged state:


# scswitch -F -g qfs-rg
# scswitch -u -g qfs-rg

At this point, the shared file systems are unmounted on all nodes. You can now apply any changes to the file systems' configuration, mount options, and so on. You can also re-create the file systems, if necessary. To use the file systems again after re-creating them, follow the steps in Example Configuration.
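
For example, after editing the mcf file or the mount options, you might revalidate the configuration and, if necessary, check the unmounted file system with samfsck(1M). This is a minimal sketch; Qfs1 is used only as a sample family set name, and the -F option runs samfsck in repair mode:


# /opt/SUNWsamfs/sbin/sam-fsd
# /opt/SUNWsamfs/sbin/samfsck -F Qfs1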

3. If you want to make changes to the metadata server resource group configuration or to the Sun StorageTek QFS software, remove the resource, the resource group, and the resource type, and verify that everything is removed.

For example, you might need to upgrade to new packages.


# scswitch -n -j qfs-res
# scswitch -r -j qfs-res
# scrgadm -r -g qfs-rg
# scrgadm -r -t SUNW.qfs
# scstat

At this point, you can re-create the resource group to define different names, node lists, and so on. You can also remove or upgrade the Sun StorageTek QFS shared software, if necessary. After the new software is installed, the metadata resource group and the resource can be re-created and can be brought online.
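
For example, the following is a minimal sketch of re-creating the metadata server resource group after the new packages are installed, using the qfs-rg and qfs-res names from this procedure. The node list and the QFSFileSystem mount point are placeholders and must match your configuration:


# scrgadm -a -t SUNW.qfs
# scrgadm -a -g qfs-rg -h scnode-A,scnode-B
# scrgadm -a -g qfs-rg -t SUNW.qfs -j qfs-res -x QFSFileSystem=/global/qfs1
# scswitch -Z -g qfs-rg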


procedure icon  To Disable HA-NFS on a File System That Uses Raw Global Devices

Use this general example procedure to disable HA-NFS on an unshared Sun StorageTek QFS file system that is using raw global devices. This example procedure is based on Example 1: HA-NFS on Raw Global Devices.

1. Use the scswitch(1M) -F -g command to take the resource group offline:


# scswitch -F -g nfs-rg

2. Disable the NFS, Sun StorageTek QFS, and LogicalHost resource types:


# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1

3. Remove the previously configured resources:


# scrgadm -r -j nfs1-res
# scrgadm -r -j qfsnfs1-res
# scrgadm -r -j lh-nfs1

4. Remove the previously configured resource group:


# scrgadm -r -g nfs-rg

5. Clean up the NFS configuration directories:


# rm -fr /global/nfs

6. Disable the resource types used, if they were previously added and are no longer needed:


# scrgadm -r -t SUNW.HAStoragePlus
# scrgadm -r -t SUNW.LogicalHostname
# scrgadm -r -t SUNW.nfs


procedure icon  To Disable HA-NFS on a File System That Uses Solaris Volume Manager-Controlled Volumes

Use this general example procedure to disable HA-NFS on an unshared Sun StorageTek QFS file system that is using Solstice DiskSuite/Solaris Volume Manager-controlled volumes. This example procedure is based on Example 2: HA-NFS on Volumes Controlled by Solstice DiskSuite/Solaris Volume Manager.

1. Take the resource group offline:


# scswitch -F -g nfs-rg

2. Disable the NFS, Sun StorageTek QFS, and LogicalHost resource types:


# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1

3. Remove the previously configured resources:


# scrgadm -r -j nfs1-res
# scrgadm -r -j qfsnfs1-res
# scrgadm -r -j lh-nfs1

4. Remove the previously configured resource group:


# scrgadm -r -g nfs-rg

5. Clean up the NFS configuration directories:


# rm -fr /global/nfs

6. Disable the resource types used, if they were previously added and are no longer needed:


# scrgadm -r -t SUNW.HAStoragePlus
# scrgadm -r -t SUNW.LogicalHostname
# scrgadm -r -t SUNW.nfs

7. Delete RAID-5 and RAID-1 sets:


# metaclear -s nfsdg -f d30 d20 d21 d22 d23 d11 d1 d2 d3 d4

8. Remove mediation detection of drive errors:


# metaset -s nfsdg -d  -m scnode-A
# metaset -s nfsdg -d  -m scnode-B

9. Remove the shared DID devices from the nfsdg disk group:


# metaset -s nfsdg -d -f /dev/did/dsk/d4 /dev/did/dsk/d5 \
	/dev/did/dsk/d6 /dev/did/dsk/d7

10. Remove the configuration of disk group nfsdg across nodes in the Sun Cluster system:


# metaset -s  nfsdg -d -f -h scnode-A scnode-B

11. Delete the metadevice state database replicas, if they are no longer needed:


# metadb -d -f /dev/dsk/c0t0d0s7
# metadb -d -f /dev/dsk/c1t0d0s7
# metadb -d -f /dev/dsk/c2t0d0s7


procedure icon  To Disable HA-NFS on a Sun StorageTek QFS File System That Uses VxVM-Controlled Volumes

Use this general example procedure to disable HA-NFS on an unshared Sun StorageTek QFS file system that is using VxVM-controlled volumes. This example procedure is based on Example 3: HA-NFS on VxVM Volumes.

1. Take the resource group offline:


# scswitch -F -g nfs-rg

2. Disable the NFS, Sun StorageTek QFS, and LogicalHost resource types:


# scswitch -n -j nfs1-res
# scswitch -n -j qfsnfs1-res
# scswitch -n -j lh-nfs1

3. Remove the previously configured resources:


# scrgadm -r -j nfs1-res
# scrgadm -r -j qfsnfs1-res
# scrgadm -r -j lh-nfs1

4. Remove the previously configured resource group:


# scrgadm -r -g nfs-rg

5. Clean up the NFS configuration directories:


# rm -fr /global/nfs

6. Disable the resource types used, if they were previously added and are no longer needed:


# scrgadm -r -t SUNW.HAStoragePlus
# scrgadm -r -t SUNW.LogicalHostname
# scrgadm -r -t SUNW.nfs

7. Destroy the VxVM disk group:


# vxdg destroy nfsdg

8. Remove the VxVM devices:


# vxdisk rm fabric_0 fabric_1 fabric_2 fabric_3 fabric_4


High-Availability Sun StorageTek SAM Configuration Using Sun Cluster

Sun StorageTek SAM can also be configured for high availability by using Sun Cluster software. By allowing other nodes in a cluster to automatically host the archiving workload when the primary node fails, Sun Cluster software can significantly reduce downtime and increase productivity.

High-availability SAM (HA-SAM) depends on the Sun StorageTek QFS Sun Cluster agent, so this configuration must be installed with a shared Sun StorageTek QFS file system that is mounted and managed by the Sun StorageTek QFS Sun Cluster agent.
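
For example, before configuring HA-SAM you can verify that the shared file system is mounted and that its SUNW.qfs resource group is online. This is only a minimal sketch; qfs-rg is the resource group name used earlier in this chapter and serves only as an example here:


# scstat -g
# mount -p | grep samfs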

For more information see the Sun StorageTek Storage Archive Manager Archive Configuration and Administration Guide.