CHAPTER 6
Configuring Sun StorageTek QFS in a Sun Cluster Environment
This chapter describes how the Sun StorageTek QFS software works in a Sun Cluster environment. It also provides configuration examples for both shared and unshared Sun StorageTek QFS file systems in a Sun Cluster environment.
This chapter contains the following sections:
With versions 4U2 and later of the Sun StorageTek QFS software, you can install a Sun StorageTek QFS file system in a Sun Cluster environment and configure the file system for high availability. The configuration method you use varies, depending on whether your file system is shared or unshared.
This chapter assumes that you are an experienced user of both the Sun StorageTek QFS software and the Sun Cluster environment. It also assumes you have performed either or both of the following:
It is recommended that you read the following documentation before continuing with this chapter:
The following restrictions apply to the Sun StorageTek QFS software in a Sun Cluster environment:
Note - Failback is not supported as a feature of the SUNW.qfs agent.
The shared file system uses Sun Cluster disk identifier (DID) support to enable data access by the Sun Cluster data service for Oracle Real Application Clusters. The unshared file system uses global device volume support and volume manager-controlled volume support to enable data access by failover applications supported by the Sun Cluster system.
With DID support, each device that is under the control of the Sun Cluster system, whether it is multipathed or not, is assigned a unique DID. For every unique DID device, there is a corresponding global device. The Sun StorageTek QFS shared file system can be configured on redundant storage that consists only of DID devices (/dev/did/*), where DID devices are accessible only on nodes that have a direct connection to the device through a host bus adapter (HBA).
Configuring the Sun StorageTek QFS shared file system on DID devices and configuring the SUNW.qfs resource type for use with the file system makes the file system's shared metadata server highly available. The Sun Cluster data service for Oracle Real Application Clusters can then access data from within the file system. Additionally, the Sun StorageTek QFS Sun Cluster agent can then automatically relocate the metadata server for the file system as necessary.
Note - Beginning with version 4U6 of the Sun StorageTek QFS software, you can also have shared clients outside of the cluster in a Sun Cluster environment. For complete configuration instructions, see Configuring Shared Clients Outside the Cluster.
A global device is the Sun Cluster system's mechanism for accessing an underlying DID device from any node within the Sun Cluster system, assuming that the nodes hosting the DID device are available. Global devices and volume manager-controlled volumes can be made accessible from every node in the Sun Cluster system. The unshared Sun StorageTek QFS file system can be configured on redundant storage that consists of either raw global devices (/dev/global/*) or volume manager-controlled volumes.
Configuring the unshared file system on these global devices or volume manager-controlled devices and configuring the HAStoragePlus resource type for use with the file system makes the file system highly available with the ability to fail over to other nodes.
In the 4U4 release of Sun StorageTek QFS, support was added for Solaris Volume Manager for Sun Cluster, which is an extension to Solaris Volume Manager that is bundled with the Solaris 9 and Solaris 10 OS releases. Sun StorageTek QFS only supports Solaris Volume Manager for Sun Cluster on Solaris 10.
Sun StorageTek QFS support for Solaris Volume Manager for Sun Cluster was introduced to take advantage of shared Sun StorageTek QFS host-based mirroring as well as Oracle's implementation for application binary recovery (ABR) and direct mirror reads (DMR) for Oracle RAC-based applications.
Use of Solaris Volume Manager for Sun Cluster with Sun StorageTek QFS requires Sun Cluster software and an additional unbundled software package included with the Sun Cluster software.
With this addition of Solaris Volume Manager for Sun Cluster support, four new mount options were introduced. These mount options are only available if Sun StorageTek QFS detects that it is configured on Solaris Volume Manager for Sun Cluster. The mount options are:
The following is a configuration example for using Sun StorageTek QFS with Solaris Volume Manager for Sun Cluster.
In the example below, it is assumed that the following configuration has already been done:
In this example there are three shared Sun StorageTek QFS file systems:
1. Create the metadb on each node.
2. Create the disk group on one node.
3. Run scdidadm to obtain devices on one node.
The mirroring scheme is as follows:
21 <-> 13
14 <-> 17
23 <-> 16
15 <-> 19
4. Add devices to the set on one node.
# metaset -s datadg -a /dev/did/rdsk/d21 /dev/did/rdsk/d13 /dev/did/rdsk/d14 \
/dev/did/rdsk/d17 /dev/did/rdsk/d23 /dev/did/rdsk/d16 /dev/did/rdsk/d15 \
/dev/did/rdsk/d19
5. Create the mirrors on one node.
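The exact commands depend on your metadevice naming and slice layout. A minimal sketch for the first mirrored pair (21 <-> 13), assuming slice 0 of each DID device holds the data and using d100 through d102 as placeholder metadevice names, might look like this:

# metainit -s datadg d101 1 1 /dev/did/rdsk/d21s0
# metainit -s datadg d102 1 1 /dev/did/rdsk/d13s0
# metainit -s datadg d100 -m d101
# metattach -s datadg d100 d102

Repeat the same sequence for the remaining pairs (14 <-> 17, 23 <-> 16, and 15 <-> 19).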
6. Perform the Sun StorageTek QFS installation on each node.
7. Create the mcf file on each node.
8. Create the file system hosts files.
9. Create the /etc/opt/SUNWsamfs/samfs.cmd file.
10. Create the Sun StorageTek QFS file systems. See the Sun StorageTek QFS Installation and Upgrade Guide for more information.
11. Configure the resource group in Sun Cluster to manage failover of the Sun StorageTek QFS metadata server.
a. Build and append the /etc/vfstab mount entries.
#
# RAC on shared QFS
#
Data - /cluster/Data samfs - no shared,notrace
Redo - /cluster/Redo samfs - no shared,notrace
Crs  - /cluster/Crs  samfs - no shared,notrace
b. Mount the file systems across the cluster on each node.
First, mount the shared Sun StorageTek QFS file systems on the current metadata server, and then mount the file system on each metadata client.
To verify this step, type:
# df -h -F samfs
c. Create the Sun Cluster resource group to manage the metadata server.
Register the QFS resource type:
# scrgadm -a -t SUNW.qfs
Add the resource group with the Sun Cluster and shared Sun StorageTek QFS metadata nodes:
# scrgadm -a -g sc-QFS-rg -h scNode-A,scNode-B \
-y RG_DEPENDENCIES="rac-framework-rg"
Add the shared Sun StorageTek QFS file system resource, of the SUNW.qfs resource type, to the resource group:
# scrgadm -a -g sc-QFS-rg -t SUNW.qfs -j sc-qfs-fs-rs -x QFSFileSystem=/cluster/Data, \
/cluster/Redo,/cluster/Crs
Bring the resource group online:
# scswitch -Z -g sc-QFS-rg
The shared Sun StorageTek QFS file system is now ready to use.
This chapter provides configuration examples for the Sun StorageTek QFS shared file system on a Sun Cluster system and for the unshared Sun StorageTek QFS file system on a Sun Cluster system. All configuration examples are based on a platform consisting of the following:
All configurations in this chapter are also based on CODE EXAMPLE 6-1. In this code example, the scdidadm(1M) command displays the DID devices, and the -L option lists the DID device paths, including those on all nodes in the Sun Cluster system.
CODE EXAMPLE 6-1 shows that DID devices d4 through d8 are accessible from both Sun Cluster systems (scnode-A and scnode-B). With the Sun StorageTek QFS file system sizing requirements and with knowledge of your intended application and configuration, you can decide on the most appropriate apportioning of devices to file systems. By using the Solaris format(1M) command, you can determine the sizing and partition layout of each DID device and resize the partitions on each DID device, if needed. Given the available DID devices, you can also configure multiple devices and their associated partitions to contain the file systems, according to your sizing requirements.
When you install a Sun StorageTek QFS shared file system in a Sun Cluster environment, you configure the file system's metadata server under the SUNW.qfs resource type. This makes the metadata server highly available and enables the Sun StorageTek QFS shared file system to be globally accessible on all configured nodes in the Sun Cluster environment.
A Sun StorageTek QFS shared file system is typically associated with a scalable application. The Sun StorageTek QFS shared file system is mounted on, and the scalable application is active on, one or more Sun Cluster nodes.
If a node in the Sun Cluster system fails, or if you switch over the resource group, the metadata server resource (Sun StorageTek QFS Sun Cluster agent) automatically relocates the file system's metadata server as necessary. This ensures that the other nodes' access to the shared file system is not affected.
When the Sun Cluster system boots, the metadata server resource ensures that the file system is mounted on all nodes that are part of the resource group. However, the file system mount on those nodes is not monitored. Therefore, in certain failure cases, the file system might be unavailable on certain nodes, even if the metadata server resource is in the online state.
If you use Sun Cluster administrative commands to bring the metadata server resource group offline, the file system under the metadata server resource remains mounted on the nodes. To unmount the file system (with the exception of a node that is shut down), you must bring the metadata server resource group into the unmanaged state by using the appropriate Sun Cluster administrative command.
To remount the file system at a later time, you must bring the resource group into a managed state and then into an online state.
This section shows an example of the Sun StorageTek QFS shared file system installed on raw DID devices with the Sun Cluster data service for Oracle Real Application Clusters. For detailed information on how to use the Sun StorageTek QFS shared file system with the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
As shown in CODE EXAMPLE 6-1, DID devices d4 through d8 are highly available and are contained on controller-based storage. For you to configure a Sun StorageTek QFS shared file system in a Sun Cluster environment, the controller-based storage must support device redundancy by using RAID-1 or RAID-5.
For simplicity in this example, two file systems are created:
Additionally, device d4 is used for Sun StorageTek QFS metadata. This device has two 50-gigabyte slices. The remaining devices, d5 through d8, are used for Sun StorageTek QFS file data.
This configuration involves five main steps, as detailed in the following subsections:
1. Preparing to create Sun StorageTek QFS file systems
2. Creating the file systems and configuring the Sun Cluster nodes
3. Validating the configuration
4. Configuring the network name service
5. Configuring the Sun Cluster data service for Oracle Real Application Clusters
1. From one node in the Sun Cluster system, use the format(1M) utility to lay out partitions on /dev/did/dsk/d4 (CODE EXAMPLE 6-2).
In this example, the action is performed from node scnode-A.
Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 50-gigabyte partition. Partition 1 is configured to be the same size as partition 0.
2. On the same node, use the format(1M) utility to lay out partitions on /dev/did/dsk/d5 (CODE EXAMPLE 6-3).
3. Still on the same node, replicate the device d5 partitioning to devices d6 through d8.
This example shows the command for device d6:
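One common approach is to copy the volume table of contents with prtvtoc(1M) and write it with fmthard(1M); the following sketch assumes that method and that slice 2 represents the whole disk:

# prtvtoc /dev/did/rdsk/d5s2 | fmthard -s - /dev/did/rdsk/d6s2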
4. On all nodes that are potential hosts of the file systems, perform the following:
a. Configure the six partitions into two Sun StorageTek QFS shared file systems by adding two new configuration entries (qfs1 and qfs2) to the mcf file (CODE EXAMPLE 6-4).
For more information about the mcf file, see Function of the mcf File or the Sun StorageTek QFS Installation and Upgrade Guide.
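For example, based on the device layout described earlier (two metadata slices on d4 and file data on d5 through d8), the two shared file system entries could look like the following sketch; the equipment ordinals and data slice numbers are placeholders:

qfs1                10   ma   qfs1   on   shared
/dev/did/dsk/d4s0   11   mm   qfs1   on
/dev/did/dsk/d5s0   12   mr   qfs1   on
/dev/did/dsk/d6s0   13   mr   qfs1   on
qfs2                20   ma   qfs2   on   shared
/dev/did/dsk/d4s1   21   mm   qfs2   on
/dev/did/dsk/d7s0   22   mr   qfs2   on
/dev/did/dsk/d8s0   23   mr   qfs2   on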
b. Edit the /etc/opt/SUNWsamfs/samfs.cmd file to add the mount options that are required for the Sun Cluster data service for Oracle Real Application Clusters (CODE EXAMPLE 6-5).
fs = qfs2
  stripe = 1
  sync_meta = 1
  mh_write
  qwrite
  forcedirectio
  rdlease = 300
For more information about the mount options that are required by the Sun Cluster data service for Oracle Real Application Clusters, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
c. Validate that the configuration is correct.
Be sure to perform this validation after you have configured the mcf file and the samfs.cmd file on each node.
Perform this procedure for each file system you are creating. This example describes how to create the qfs1 file system.
1. Obtain the Sun Cluster private interconnect names by using the following command:
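For example, the scconf(1M) command used elsewhere in this chapter can list these names:

# /usr/cluster/bin/scconf -p | egrep "Cluster node name:|Node private hostname:"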
2. On each node that is a potential host of the file system, do the following:
a. Use the samd(1M) config command, which signals to the Sun StorageTek QFS daemon that a new Sun StorageTek QFS configuration is available:
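For example:

# samd config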
b. Create the Sun StorageTek QFS shared hosts file for the file system (/etc/opt/SUNWsamfs/hosts.family-set-name), based on the Sun Cluster system's private interconnect names that you obtained in Step 1.
3. Edit the unique Sun StorageTek QFS shared file system's host configuration file with the Sun Cluster system's interconnect names (CODE EXAMPLE 6-6).
For Sun Cluster software failover and fencing operations, the Sun StorageTek QFS shared file system must use the same interconnect names as the Sun Cluster system.
4. From one node in the Sun Cluster system, use the sammkfs(1M) -S command to create the Sun StorageTek QFS shared file system:
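For example, for the qfs1 file system:

# sammkfs -S qfs1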
5. On each node that is a potential host of the file system, do the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to give the mount point 755 (read and execute for group and other) access:
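For example, for the qfs1 file system mounted at /global/qfs1 (the mount point used in the vfstab entry below):

# mkdir /global/qfs1
# chown root /global/qfs1
# chmod 755 /global/qfs1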
b. Add the Sun StorageTek QFS shared file system entry to the /etc/vfstab file:
# cat >> /etc/vfstab <<EOF
# device     device    mount          FS     fsck  mount    mount
# to mount   to fsck   point          type   pass  at boot  options
#
qfs1         -         /global/qfs1   samfs  -     no       shared
EOF
Perform this procedure for each file system you create. This example describes how to validate the configuration for file system qfs1.
1. If you do not know which node is acting as the metadata server for the file system, use the samsharefs(1M) -R command.
In CODE EXAMPLE 6-7 the metadata server for qfs1 is scnode-A.
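For example:

# samsharefs -R qfs1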
2. Use the mount(1M) command to mount the file system first on the metadata server and then on each node in the Sun Cluster system.
Note - It is important that you mount the file system on the metadata server first.
3. Validate voluntary failover by issuing the samsharefs(1M) -s command, which changes the Sun StorageTek QFS shared file system between nodes:
# samsharefs -s scnode-B qfs1
# ls /global/qfs1
lost+found/
# samsharefs -s scnode-A qfs1
# ls /global/qfs1
lost+found/
4. Validate that the required Sun Cluster resource type is added to the resource configuration:
5. If you cannot find the Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the resource configuration:
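For example, to check for and then register the SUNW.qfs resource type:

# scrgadm -p | egrep "SUNW.qfs"
# scrgadm -a -t SUNW.qfs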
6. Register and configure the SUNW.qfs resource type:
# scrgadm -a -g qfs-rg -h scnode-A,scnode-B
# scrgadm -a -g qfs-rg -t SUNW.qfs -j qfs-res \
-x QFSFileSystem=/global/qfs1,/global/qfs2
7. Use the scswitch(1M) -Z -g command to bring the resource group online:
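For example, for the qfs-rg resource group created in the previous step:

# scswitch -Z -g qfs-rg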
8. Ensure that the resource group is functional on all configured nodes:
This section provides an example of how to configure the data service for Oracle Real Application Clusters for use with Sun StorageTek QFS shared file systems. For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
1. Install the data service as described in the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
2. Mount the Sun StorageTek QFS shared file systems.
3. Set the correct ownership and permissions on the file systems so that the Oracle database operations are successful:
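For example, assuming the oracle user and dba group shown in the next step:

# chown oracle:dba /global/qfs1 /global/qfs2
# chmod 755 /global/qfs1 /global/qfs2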
4. As the oracle user, create the subdirectories that are required for the Oracle Real Application Clusters installation and database files:
$ id
uid=120(oracle) gid=520(dba)
$ mkdir /global/qfs1/oracle_install
$ mkdir /global/qfs2/oracle_db
The Oracle Real Application Clusters installation uses the /global/qfs1/oracle_install directory path as the value for the ORACLE_HOME environment variable that is used in Oracle operations. The Oracle Real Application Clusters database files' path is prefixed with the /global/qfs2/oracle_db directory path.
5. Install the Oracle Real Application Clusters software.
During the installation, provide the path for the installation defined in Step 4 (/global/qfs1/oracle_install).
6. Create the Oracle Real Application Clusters database.
During database creation, specify that you want the database files located in the qfs2 shared file system.
7. If you are automating the startup and shutdown of Oracle Real Application Clusters database instances, ensure that the required dependencies for resource groups and resources are set.
For more information, see the Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS.
Note - If you plan to automate the startup and shutdown of Oracle Real Application Clusters database instances, you must use Sun Cluster software version 3.1 9/04 or a compatible version.
When you install the unshared Sun StorageTek QFS file system on a Sun Cluster system, you configure the file system for high availability (HA) under the Sun Cluster HAStoragePlus resource type. An unshared Sun StorageTek QFS file system in a Sun Cluster system is typically associated with one or more failover applications, such as highly available network file server (HA-NFS) or highly available ORACLE (HA-ORACLE). Both the unshared Sun StorageTek QFS file system and the failover applications are active in a single resource group; the resource group is active on one Sun Cluster node at a time.
An unshared Sun StorageTek QFS file system is mounted on a single node at any given time. If the Sun Cluster fault monitor detects an error, or if you switch over the resource group, the unshared Sun StorageTek QFS file system and its associated HA applications fail over to another node, depending on how the resource group has been previously configured.
Any file system contained on a Sun Cluster global device group (/dev/global/*) can be used with the HAStoragePlus resource type. When a file system is configured with the HAStoragePlus resource type, it becomes part of a Sun Cluster resource group and the file system under Sun Cluster Resource Group Manager (RGM) control is mounted locally on the node where the resource group is active. When the RGM causes a resource group switchover or fails over to another configured Sun Cluster node, the unshared Sun StorageTek QFS file system is unmounted from the current node and remounted on the new node.
Each unshared Sun StorageTek QFS file system requires a minimum of two raw disk partitions or volume manager-controlled volumes (Solstice DiskSuite/Solaris Volume Manager or VERITAS Volume Manager), one for Sun StorageTek QFS metadata (inodes) and one for Sun StorageTek QFS file data. Configuring multiple partitions or volumes across multiple disks through multiple data paths increases unshared Sun StorageTek QFS file system performance. For information about sizing metadata and file data partitions, see Design Basics.
This section provides three examples of Sun Cluster system configurations using the unshared Sun StorageTek QFS file system. In these examples, a file system is configured in combination with an HA-NFS file mount point on the following:
For simplicity in all of these configurations, ten percent of each file system is used for Sun StorageTek QFS metadata, and the remaining space is used for Sun StorageTek QFS file data. For information about sizing and disk layout considerations, see the Sun StorageTek QFS Installation and Upgrade Guide.
This example shows how to configure the unshared Sun StorageTek QFS file system with HA-NFS on raw global devices. For this configuration, the raw global devices must be contained on controller-based storage. This controller-based storage must support device redundancy through RAID-1 or RAID-5.
As shown in CODE EXAMPLE 6-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. The HAStoragePlus resource type requires the use of global devices, so each DID device (/dev/did/dsk/dx) is accessible as a global device by using the following syntax: /dev/global/dsk/dx.
The main steps in this example are as follows:
1. Prepare to create an unshared file system.
2. Create the file system and configure the Sun Cluster nodes.
3. Configure the network name service and the IP network multipathing (IPMP) validation testing.
4. Configure HA-NFS and configure the file system for high availability.
1. Use the format(1M) utility to lay out the partitions on /dev/global/dsk/d4:
Partition (or slice) 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20-gigabyte partition. The remaining space is configured into partition 1.
2. Replicate the global device d4 partitioning to global devices d5 through d7.
This example shows the command for global device d5:
3. On all nodes that are potential hosts of the file system, perform the following:
a. Configure the eight partitions (four global devices, with two partitions each) into a Sun StorageTek QFS file system by adding a new file system entry to the mcf file.
For information about the mcf file, see Function of the mcf File.
b. Validate that the configuration information you added to the mcf file is correct, and fix any errors in the mcf file before proceeding.
It is important to complete this step before you configure the Sun StorageTek QFS file system under the HAStoragePlus resource type.
1. On each node that is a potential host of the file system, issue the samd(1M) config command.
This command signals to the Sun StorageTek QFS daemon that a new Sun StorageTek QFS configuration is available.
2. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the file system:
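For example, for the qfsnfs1 file system:

# sammkfs qfsnfs1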
3. On each node that is a potential host of the file system, do the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to give the mount point 755 (read and execute for group and other) access:
b. Add the Sun StorageTek QFS file system entry to the /etc/vfstab file.
Note that the mount options field contains the sync_meta=1 value.
# cat >> /etc/vfstab <<EOF
# device     device    mount            FS     fsck  mount    mount
# to mount   to fsck   point            type   pass  at boot  options
#
qfsnfs1      -         /global/qfsnfs1  samfs  2     no       sync_meta=1
EOF
c. Validate the configuration by mounting and unmounting the file system:
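For example:

# mount qfsnfs1
# ls /global/qfsnfs1
# umount qfsnfs1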
4. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration:
5. If you cannot find a required Sun Cluster resource type, use the scrgadm(1M) -a -t command to add it to the configuration:
This section provides an example of how to configure the network name service and the IPMP Validation Testing for your Sun Cluster nodes. For more information, see the Sun Cluster Software Installation Guide for Solaris OS, the System Administration Guide: IP Services, and the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).
1. Use vi or another text editor to edit the /etc/nsswitch.conf file so that it looks in the Sun Cluster system and files for node names.
Perform this step before you configure the Network Information Service (NIS) server.
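For example, the hosts entry might look like the following; the nis source is an assumption for a site that uses NIS:

hosts: cluster files nis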
2. Verify that the changes you made to the /etc/nsswitch.conf are correct:
3. Set up IPMP validation testing using available network adapters.
The adapters qfe2 and qfe3 are used as examples.
a. Statically configure the IPMP test address for each adapter:
b. Dynamically configure the IPMP adapters:
This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.
1. Create the NFS share point for the Sun StorageTek QFS file system.
Note that the share point is contained within the /global file system, not within the Sun StorageTek QFS file system.
# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \
/global/nfs/SUNW.nfs/dfstab.nfs1-res
2. Create the NFS resource group:
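For example, assuming the Pathprefix property points to the /global/nfs directory that holds the dfstab file created in Step 1:

# scrgadm -a -g nfs-rg -y Pathprefix=/global/nfs -h scnode-A,scnode-B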
3. Add the NFS logical host to the /etc/hosts table, using the address for your site:
4. Use the scrgadm(1M) -a -L -g command to add the logical host to the NFS resource group:
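For example, using the lh-nfs1 logical host name referenced later in this procedure:

# scrgadm -a -L -g nfs-rg -l lh-nfs1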
5. Use the scrgadm(1M) -c -g command to configure the HAStoragePlus resource type:
# scrgadm -c -g nfs-rg -h scnode-A,scnode-B
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/qfsnfs1 \
-x FilesystemCheckCommand=/bin/true
6. Bring the resource group online:
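For example:

# scswitch -Z -g nfs-rg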
7. Configure the NFS resource type and set a dependency on the HAStoragePlus resource:
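For example, assuming the resource is named nfs1-res to match the dfstab.nfs1-res file created in Step 1:

# scrgadm -a -g nfs-rg -j nfs1-res -t SUNW.nfs \
-y Resource_dependencies=qfsnfs1-res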
8. Bring the NFS resource online:
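For example:

# scswitch -e -j nfs1-res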
The NFS resource /net/lh-nfs1/global/qfsnfs1 is now fully configured and is also highly available.
9. Before announcing the availability of the highly available NFS file system on the Sun StorageTek QFS file system, test the resource group to ensure that it can be switched between all configured nodes without errors and can be taken online and offline:
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg
This example shows how to configure the unshared Sun StorageTek QFS file system with HA-NFS on volumes controlled by Solstice DiskSuite/Solaris Volume Manager software. With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5 volumes. Typically, Solaris Volume Manager is used only when the underlying controller-based storage is not redundant.
As shown in CODE EXAMPLE 6-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. Solaris Volume Manager requires that DID devices be used to populate the raw devices from which Solaris Volume Manager can configure volumes. Solaris Volume Manager creates globally accessible disk groups, which can then be used by the HAStoragePlus resource type for creating Sun StorageTek QFS file systems.
This example follows these steps:
1. Prepare the Solstice DiskSuite/Solaris Volume Manager software.
2. Prepare to create an unshared file system.
3. Create the file system and configure the Sun Cluster nodes.
4. Configure the network name service and the IPMP validation testing.
5. Configure HA-NFS and configure the file system for high availability.
1. Determine whether a Solaris Volume Manager metadatabase (metadb) is already configured on each node that is a potential host of the Sun StorageTek QFS file system:
# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        16              8192            /dev/dsk/c2t0d0s7
If the metadb(1M) command does not return a metadatabase configuration, then on each node, create three or more database replicas on one or more local disks. Each replica must be at least 16 megabytes in size. For more information about creating the metadatabase configuration, see the Sun Cluster Software Installation Guide for Solaris OS.
2. Create an HA-NFS disk group to contain all Solaris Volume Manager volumes for this Sun StorageTek QFS file system:
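For example, using the nfsdg disk group name that appears later in this chapter:

# metaset -s nfsdg -a -h scnode-A scnode-B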
3. Add DID devices d4 through d7 to the pool of raw devices from which Solaris Volume Manager can create volumes:
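For example:

# metaset -s nfsdg -a /dev/did/rdsk/d4 /dev/did/rdsk/d5 \
/dev/did/rdsk/d6 /dev/did/rdsk/d7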
1. Use the format(1M) utility to lay out partitions on /dev/global/dsk/d4:
This example shows that partition or slice 0 skips over the volume's Volume Table of Contents (VTOC) and is then configured as a 20-gigabyte partition. The remaining space is configured into partition 1.
2. Replicate the partitioning of DID device d4 to DID devices d5 through d7.
This example shows the command for device d5:
3. Configure the eight partitions (four DID devices, two partitions each) into two RAID-1 (mirrored) Sun StorageTek QFS metadata volumes and two RAID-5 (parity-striped) Sun StorageTek QFS file data volumes:
a. Combine partition (slice) 0 of these four drives into two RAID-1 sets:
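A minimal sketch of the first mirror, assuming placeholder metadevice names (d20, d21, d10); build the second mirror the same way from devices d6 and d7:

# metainit -s nfsdg d20 1 1 /dev/did/rdsk/d4s0
# metainit -s nfsdg d21 1 1 /dev/did/rdsk/d5s0
# metainit -s nfsdg d10 -m d20
# metattach -s nfsdg d10 d21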
b. Combine partition 1 of these four drives into two RAID-5 sets:
c. On each node that is a potential host of the file system, add the Sun StorageTek QFS file system entry to the mcf file:
For more information about the mcf file, see Function of the mcf File.
4. Validate that the mcf(4) configuration is correct on each node, and fix any errors in the mcf file before proceeding.
1. On each node that is a potential host of the file system, use the samd(1M) config command.
This command signals to the Sun StorageTek QFS daemon that a new Sun StorageTek QFS configuration is available.
2. Enable Solaris Volume Manager mediation detection of disk groups, which assists the Sun Cluster system in the detection of drive errors:
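For example:

# metaset -s nfsdg -a -m scnode-A scnode-B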
3. On each node that is a potential host of the file system, ensure that the NFS disk group exists:
4. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the file system:
5. On each node that is a potential host of the file system, do the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to give the mount point 755 (read and execute for group and other) access:
b. Add the Sun StorageTek QFS file system entry to the /etc/vfstab file.
Note that the mount options field contains the sync_meta=1 value.
# cat >> /etc/vfstab <<EOF
# device     device    mount            FS     fsck  mount    mount
# to mount   to fsck   point            type   pass  at boot  options
#
qfsnfs1      -         /global/qfsnfs1  samfs  2     no       sync_meta=1
EOF
c. Validate the configuration by mounting and unmounting the file system.
Perform this step one node at a time. In this example, the qfsnfs1 file system is mounted and unmounted on one node.
6. Use the scrgadm(1M) -p | egrep command to validate that the required Sun Cluster resource types have been added to the resource configuration:
7. If you cannot find a required Sun Cluster resource type, add it with one or more of the following commands:
To configure the Network Name Service and the IPMP validation testing, follow the instructions in To Configure the Network Name Service and the IPMP Validation Testing.
This section provides an example of how to configure HA-NFS. For more information about HA-NFS, see the Sun Cluster Data Service for Network File System (NFS) Guide for Solaris OS and your NFS documentation.
1. Create the NFS share point for the Sun StorageTek QFS file system.
Note that the share point is contained within the /global file system, not within the Sun StorageTek QFS file system.
# mkdir -p /global/nfs/SUNW.nfs
# echo "share -F nfs -o rw /global/qfsnfs1" > \
/global/nfs/SUNW.nfs/dfstab.nfs1-res
2. Create the NFS resource group:
3. Add a logical host to the NFS resource group:
4. Configure the HAStoragePlus resource type:
# scrgadm -c -g nfs-rg -h scnode-A,scnode-B
# scrgadm -a -g nfs-rg -j qfsnfs1-res -t SUNW.HAStoragePlus \
-x FilesystemMountPoints=/global/qfsnfs1 \
-x FilesystemCheckCommand=/bin/true
5. Bring the resource group online:
6. Configure the NFS resource type and set a dependency on the HAStoragePlus resource:
7. Use the scswitch(1M) -e -j command to bring the NFS resource online:
The NFS resource /net/lh-nfs1/global/qfsnfs1 is fully configured and highly available.
8. Before you announce the availability of the highly available NFS file system on the Sun StorageTek QFS file system, test the resource group to ensure that it can be switched between all configured nodes without errors and can be taken online and offline:
# scswitch -z -g nfs-rg -h scnode-A
# scswitch -z -g nfs-rg -h scnode-B
# scswitch -F -g nfs-rg
# scswitch -Z -g nfs-rg
This example shows how to configure the unshared Sun StorageTek QFS file system with HA-NFS on VERITAS Volume Manager-controlled volumes (VxVM volumes). With this configuration, you can choose whether the DID devices are contained on redundant controller-based storage using RAID-1 or RAID-5. Typically, VxVM is used only when the underlying storage is not redundant.
As shown in CODE EXAMPLE 6-1, the DID devices used in this example, d4 through d7, are highly available and are contained on controller-based storage. VxVM requires that shared DID devices be used to populate the raw devices from which VxVM configures volumes. VxVM creates highly available disk groups by registering the disk groups as Sun Cluster device groups. These disk groups are not globally accessible, but can be failed over, making them accessible to at least one node. The disk groups can be used by the HAStoragePlus resource type.
Note - The VxVM packages are separate, additional packages that must be installed, patched, and licensed. For information about installing VxVM, see the VxVM Volume Manager documentation.
To use Sun StorageTek QFS software with VxVM, you must install the following VxVM packages:
This example follows these steps:
1. Configure the VxVM software.
2. Prepare to create an unshared file system.
3. Create the file system and configure the Sun Cluster nodes.
4. Validate the configuration.
5. Configure the network name service and the IPMP validation testing.
6. Configure HA-NFS and configure the file system for high availability.
This section provides an example of how to configure the VxVM software for use with the Sun StorageTek QFS software. For more detailed information about the VxVM software, see the VxVM documentation.
1. Determine the status of dynamic multipathing (DMP) for VERITAS.
2. Use the scdidadm(1M) utility to determine the HBA controller number of the physical devices to be used by VxVM.
As shown in the following example, the multi-node accessible storage is available from scnode-A using HBA controller c6, and from node scnode-B using controller c7:
# scdidadm -L
[ some output deleted]
4   scnode-A:/dev/dsk/c6t60020F20000037D13E26595500062F06d0   /dev/did/dsk/d4
4   scnode-B:/dev/dsk/c7t60020F20000037D13E26595500062F06d0   /dev/did/dsk/d4
3. Use VxVM to configure all available storage as seen through controller c6:
4. Place all of this controller's devices under VxVM control:
5. Create a disk group, create volumes, and then start the new disk group:
6. Ensure that the previously started disk group is active on this system:
7. Configure two mirrored volumes for Sun StorageTek QFS metadata and two volumes for Sun StorageTek QFS file data volumes.
These mirroring operations are performed as background processes, given the length of time they take to complete.
8. Configure the previously created VxVM disk group as a Sun Cluster-controlled disk group:
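For example, assuming the disk group is named nfsdg (the name used in the validation step that follows):

# scconf -a -D type=vxvm,name=nfsdg,nodelist=scnode-A:scnode-B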
Perform this procedure on each node that is a potential host of the file system.
1. Add the Sun StorageTek QFS file system entry to the mcf file.
For more information about the mcf file, see Function of the mcf File.
2. Validate that the mcf(4) configuration is correct, and correct any errors in the mcf file before proceeding:
1. On each node that is a potential host of the file system, use the samd(1M) config command.
This command signals to the Sun StorageTek QFS daemon that a new Sun StorageTek QFS configuration is available.
2. From one node in the Sun Cluster system, use the sammkfs(1M) command to create the file system:
3. On each node that is a potential host of the file system, do the following:
a. Use the mkdir(1M) command to create a global mount point for the file system, use the chown(1M) command to make root the owner of the mount point, and use the chmod(1M) command to give the mount point 755 (read and execute for group and other) access:
b. Add the Sun StorageTek QFS file system entry to the /etc/vfstab file.
Note that the mount options field contains the sync_meta=1 value.
# cat >> /etc/vfstab <<EOF
# device     device    mount            FS     fsck  mount    mount
# to mount   to fsck   point            type   pass  at boot  options
#
qfsnfs1      -         /global/qfsnfs1  samfs  2     no       sync_meta=1
EOF
1. Validate that all nodes that are potential hosts of the file system are configured correctly.
To do this, move the disk group that you created in To Configure the VxVM Software to the node, and mount and then unmount the file system. Perform this validation one node at a time.
# scswitch -z -D nfsdg -h scnode-B
# mount qfsnfs1
# ls /global/qfsnfs1
lost+found/
# umount qfsnfs1
2. Ensure that the required Sun Cluster resource types have been added to the resource configuration:
3. If you cannot find a required Sun Cluster resource type, add it with one or more of the following commands:
To configure the Network Name Service and the IPMP validation testing, follow the instructions in To Configure the Network Name Service and the IPMP Validation Testing.
To configure HA-NFS and the file system for high availability, follow the instructions in To Configure HA-NFS and the Sun StorageTek QFS File System for High Availability.
If you are configuring a Sun Cluster environment and would like to have shared clients that are outside of the cluster, perform the following configurations.
The example below is based on a two-node metadata server cluster configuration.
The following items must be configured or verified in order to set up shared clients outside the cluster:
The following requirements must be met for the Sun StorageTek QFS metadata server Sun Cluster nodes:
The following requirements must be met for Sun StorageTek QFS metadata client nodes:
The localonly flag must be set on all data devices. The /etc/opt/SUNWsamfs/mcf file identifies which devices are to be used as Sun StorageTek QFS data devices; set the localonly flag on each of those devices.
Perform the following as root on any node running under Sun Cluster:
scconf -r -D name=dsk/dX,nodelist=node2
scconf -c -D name=dsk/dX,localonly=true
Due to the complexity of a configuration that includes both Sun Cluster and Shared Sun StorageTek QFS clients, a separate private network is mandatory for Sun StorageTek QFS metadata traffic. In addition, the following should also be true:
The following minimum software release levels are required:
The following hardware architectures are supported:
The shared storage configuration needs to include hardware-level mirroring with RAID-5 support. Servers and clients should use the Sun StorageTek Traffic Manager (MPxIO) configuration, and only shared storage is supported.
The following examples use a configuration consisting of three SPARC Sun Cluster nodes that are identified as follows:
ctelab30 MDS #SPARC Sun Cluster Node
ctelab31 MDS #SPARC Sun Cluster Node
ctelab32 MDC #SPARC QFS Client Node
After installation of the operating system, prepare the nodes by editing the /etc/hosts file on each node.
The following examples illustrate the setup process for the server network. These examples assume the following settings:
For this example the /etc/hosts, /etc/netmasks, /etc/nsswitch.conf, /etc/hostname.qfe1, and /etc/hostname.qfe2 files must be modified on each server cluster node, as follows:
1. Check the /etc/nsswitch.conf file.
2. Append the following to the /etc/netmasks file:
3. Edit the /etc/hostname.qfe1 file so that it contains the following:
ctelab30-4 netmask + broadcast + group qfs_ipmp1 up
addif ctelab30-qfe1-test deprecated -failover netmask + broadcast + up
4. Edit the /etc/hostname.qfe2 file so that it contains the following:
ctelab30-qfe2-test netmask + broadcast + deprecated group qfs_ipmp1 -failover standby up
The following examples illustrate the setup process for the client network. These examples assume the following settings:
For this example, the /etc/hosts, /etc/netmasks, /etc/nsswitch.conf, /etc/hostname.qfe1, and /etc/hostname.qfe2 must be modified on each metadata controller (MDC) node, as follows:
1. Check the /etc/nsswitch.conf file and modify as follows:
2. Append the following to the /etc/netmasks file:
3. Edit the /etc/hostname.qfe1 file to contain the following:
After the operating system has been prepared and the nodes have the MPxIO multipathing software enabled, you can install and configure the Sun Cluster software as follows:
1. Install the Sun Cluster software, following the Sun Cluster documentation.
2. Identify shared storage devices to be used as quorum devices.
scdidadm -L
scconf -a -q globaldev=dx
scconf -c -q reset
After the Sun Cluster software has been installed and the cluster configuration has been verified, you can install and configure the Sun StorageTek QFS MDS, as follows:
1. Install the Sun StorageTek QFS software by following the instructions in the Sun StorageTek QFS Installation and Upgrade Guide.
# pkgadd -d . SUNWqfsr SUNWqfsu
2. Using the Sun Cluster command scdidadm -L, identify the devices that will be used for the Sun StorageTek QFS configuration.
3. Edit the mcf file to reflect the file system devices.
4. Set local mode on the MDS Sun StorageTek QFS data devices.
For example, for the Qfs1 file system defined above, the following would be carried out for devices defined as mr devices:
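A sketch using a hypothetical DID device d10 and the second cluster node, following the scconf(1M) pattern shown above; repeat for each mr data device listed in the mcf file:

# scconf -r -D name=dsk/d10,nodelist=ctelab31
# scconf -c -D name=dsk/d10,localonly=true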
5. Edit the /etc/opt/SUNWsamfs/defaults.conf file.
6. Build the Sun StorageTek QFS file system hosts files.
For information on the hosts files, see the Sun StorageTek QFS Installation and Upgrade Guide and Changing the Shared Hosts File.
To build the shared host table on the MDS, do the following:
a. Use the Sun Cluster scconf command to obtain the host order information. For example:
# /usr/cluster/bin/scconf -p | egrep "Cluster node name:|Node private hostname:|Node ID:"
b. Make note of the scconf command output. For example:
Cluster node name:                    ctelab30
  Node ID:                            1
  Node private hostname:              clusternode1-priv
Cluster node name:                    ctelab31
  Node ID:                            2
  Node private hostname:              clusternode2-priv
c. Create the shared hosts file.
For example, the /etc/opt/SUNWsamfs/hosts.Qfs1 file would contain the following:
#
# MDS
# Shared MDS Host file for family set 'Qfs1'
#
#
ctelab30   clusternode1-priv,sc-qfs1   1   -   server
ctelab31   clusternode2-priv,sc-qfs1   2   -
ctelab32   ctelab32-4                  -   -
d. Create the local hosts file.
For example, the /etc/opt/SUNWsamfs/hosts.Qfs1.local file would contain the following:
#
# MDS
# Local MDS Host file for family set 'Qfs1'
ctelab30   clusternode1-priv
ctelab31   clusternode2-priv
7. Create the file system using the sammkfs command.
# /opt/SUNWsamfs/sbin/sammkfs -S Qfs1
8. Prepare the mount points on each cluster node.
# mkdir -p /cluster/qfs1 /cluster/qfs2
9. Append file system entries to the /etc/vfstab file.
###
# QFS Filesystems
###
Qfs1   -   /cluster/qfs1   samfs   -   no   shared
Qfs2   -   /cluster/qfs2   samfs   -   no   shared
10. Mount Qfs1 and Qfs2 on each cluster node.
# mount Qfs1
# mount Qfs2
11. Create the Sun Cluster MDS resource group.
Carry out the following steps to create the MDS resource group under Sun Cluster:
a. Register the SUNW.qfs resource type.
# /usr/cluster/bin/scrgadm -a -t SUNW.qfs
b. Create the MDS resource group.
# /usr/cluster/bin/scrgadm -a -g sc-qfs-rg -h ctelab30,ctelab31
# /usr/cluster/bin/scrgadm -c -g sc-qfs-rg \
-y RG_description="Metadata Server + MDC Clients"
c. Add the logical hostname to the resource group.
# /usr/cluster/bin/scrgadm -a -L -g sc-qfs-rg -l sc-qfs1 \
-n qfs_ipmp1@ctelab30,qfs_ipmp1@ctelab31
# /usr/cluster/bin/scrgadm -c -j sc-qfs1 \
-y R_description="Logical Hostname resource for sc-qfs1"
d. Add the Sun StorageTek QFS file system resource to the MDS resource group.
# /usr/cluster/bin/scrgadm -a -g sc-qfs-rg -t SUNW.qfs -j fs-qfs-rs \
-x QFSFileSystem=/cluster/qfs1,/cluster/qfs2 -y Resource_dependencies=sc-qfs1
e. Bring the resource group online.
# /usr/cluster/bin/scswitch -Z -g sc-qfs-rg
After the operating system has been installed on all metadata clients, you can proceed to Sun StorageTek QFS client installation and configuration.
Before carrying out these instructions, verify that MPxIO has been enabled and that the clients can access all disk devices.
1. Install the Sun StorageTek QFS software by following the instructions in the Sun StorageTek QFS Installation and Upgrade Guide.
# pkgadd -d . SUNWqfsr SUNWqfsu
2. Use the format command on the MDC and the Sun Cluster scdidadm -L command on the MDS to identify the devices that will be used for the Sun StorageTek QFS configuration.
3. Build the mcf files on the metadata clients.
4. Edit the /etc/opt/SUNWsamfs/defaults.conf file.
5. Build the Sun StorageTek QFS file system hosts files.
Use the information from the MDS hosts files and follow the examples below.
Note - For metadata communications between the MDS and the MDC, clients that are not members of the cluster must communicate over the logical host.
a. Create the shared hosts file.
For example, the /etc/opt/SUNWsamfs/hosts.Qfs1 file would contain the following:
#
# MDC
# Shared Client Host file for family set 'Qfs1'
ctelab30   sc-qfs1      1   -   server
ctelab31   sc-qfs1      2   -
ctelab32   ctelab32-4   -   -
b. Create the local hosts file.
For example, the /etc/opt/SUNWsamfs/hosts.Qfs1.local file would contain the following:
#
# MDC
# Local Client Host file for family set 'Qfs1'
ctelab30   sc-qfs1@ctelab32-4
ctelab31   sc-qfs1@ctelab32-4
6. Create the mount points on each metadata client node.
# mkdir -p /cluster/qfs1 /cluster/qfs2
7. Append file system entries to the /etc/vfstab file.
###
# QFS Filesystems
###
Qfs1   -   /cluster/qfs1   samfs   -   yes   bg,shared
Qfs2   -   /cluster/qfs2   samfs   -   yes   bg,shared
8. Mount Qfs1 and Qfs2 on each MDC node.
# mount Qfs1
# mount Qfs2
This section demonstrates how to make changes to, disable, or remove the Sun StorageTek QFS shared or unshared file system configuration in a Sun Cluster environment. It contains the following sections:
This example procedure is based on the example in Example Configuration.
1. Log in to each node as the oracle user, shut down the database instance, and stop the listener:
2. Log in to the metadata server as superuser and bring the metadata server resource group into the unmanaged state:
At this point, the shared file systems are unmounted on all nodes. You can now apply any changes to the file systems' configuration, mount options, and so on. You can also re-create the file systems, if necessary. To use the file systems again after re-creating them, follow the steps in Example Configuration.
3. If you want to make changes to the metadata server resource group configuration or to the Sun StorageTek QFS software, remove the resource, the resource group, and the resource type, and verify that everything is removed.
For example, you might need to upgrade to new packages.
# scswitch -n -j qfs-res
# scswitch -r -j qfs-res
# scrgadm -r -g qfs-rg
# scrgadm -r -t SUNW.qfs
# scstat
At this point, you can re-create the resource group to define different names, node lists, and so on. You can also remove or upgrade the Sun StorageTek QFS shared software, if necessary. After the new software is installed, the metadata resource group and the resource can be re-created and can be brought online.
Use this general example procedure to disable HA-NFS on an unshared Sun StorageTek QFS file system that is using raw global devices. This example procedure is based on Example 1: HA-NFS on Raw Global Devices.
1. Use the scswitch(1M) -F -g command to take the resource group offline:
2. Disable the NFS, Sun StorageTek QFS, and LogicalHost resource types:
3. Remove the previously configured resources:
4. Remove the previously configured resource group:
5. Clean up the NFS configuration directories:
6. Disable the resource types used, if they were previously added and are no longer needed:
Use this general example procedure to disable HA-NFS on an unshared Sun StorageTek QFS file system that is using Solstice DiskSuite/Solaris Volume Manager-controlled volumes. This example procedure is based on Example 2: HA-NFS on Volumes Controlled by Solstice DiskSuite/Solaris Volume Manager.
1. Take the resource group offline:
2. Disable the NFS, Sun StorageTek QFS, and LogicalHost resource types:
3. Remove the previously configured resources:
4. Remove the previously configured resource group:
5. Clean up the NFS configuration directories:
6. Disable the resource types used, if they were previously added and are no longer needed:
7. Delete RAID-5 and RAID-1 sets:
8. Remove mediation detection of drive errors:
9. Remove the shared DID devices from the nfsdg disk group:
10. Remove the configuration of disk group nfsdg across nodes in the Sun Cluster system:
11. Delete the metadatabase, if it is no longer needed:
Use this general example procedure to disable HA-NFS on an unshared Sun StorageTek QFS file system that is using VxVM-controlled volumes. This example procedure is based on Example 3: HA-NFS on VxVM Volumes.
1. Take the resource group offline:
2. Disable the NFS, Sun StorageTek QFS, and LogicalHost resource types:
3. Remove the previously configured resources:
4. Remove the previously configured resource group:
5. Clean up the NFS configuration directories:
6. Disable the resource types used, if they were previously added and are no longer needed:
Sun StorageTek SAM can also be configured for high availability by using Sun Cluster software. By allowing other nodes in a cluster to automatically host the archiving workload when the primary node fails, Sun Cluster software can significantly reduce downtime and increase productivity.
High-availability SAM (HA-SAM) depends on the Sun StorageTek QFS Sun Cluster agent, so this configuration must be installed with a shared Sun StorageTek QFS file system that is mounted and managed by the Sun StorageTek QFS Sun Cluster agent.
For more information see the Sun StorageTek Storage Archive Manager Archive Configuration and Administration Guide.