Oracle® Real Application Clusters Administrator's Guide 10g Release 1 (10.1) Part Number B10765-01
This chapter describes how to add and delete nodes and instances in Oracle Real Application Clusters (RAC) databases. The topics in this chapter are:
Step 2: Extending Clusterware and Oracle Software to New Nodes
Adding Nodes that Already Have Clusterware and Oracle Software to a Cluster
Deleting Nodes from Oracle Clusters on Windows-Based Platforms
This section explains how to add nodes to clusters. First, set up the new nodes to be part of your cluster at the network level. Then extend the Cluster Ready Services (CRS) home from an existing CRS home to the new nodes, and extend the Oracle database software with RAC components to the new nodes. Finally, make the new nodes members of the existing cluster database.
Note: You can add nodes on some UNIX-based platforms without stopping existing nodes if your clusterware supports this. Refer to your vendor-specific clusterware documentation for more information.
If the nodes that you are adding to your cluster do not have clusterware or Oracle software, then you must complete the following five steps. The procedures in these steps assume that you already have an operative UNIX-based or Windows-based RAC environment. The details of these steps appear in the following sections.
To add a node to your cluster when the node is already configured with clusterware and Oracle software, follow the procedure described in "Adding Nodes that Already Have Clusterware and Oracle Software to a Cluster".
See Also: The Oracle Real Application Clusters Installation and Configuration Guide for procedures about using the Database Configuration Assistant (DBCA) to create and delete RAC databases
Complete the following procedures to connect the new nodes to the cluster and to prepare them to support your cluster database:
Connect the new nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. Refer to your hardware vendor documentation for details about this step.
Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches and drivers. Refer to your hardware vendor documentation for details about this process.
As root user on UNIX-based systems, create the Oracle users and groups using the same user ID and group ID as on the existing nodes. On Windows-based systems, perform the installation as an Administrator.
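For example, on many UNIX platforms you can create matching accounts with commands similar to the following, where the group names, user name, and numeric IDs shown are hypothetical values that must match those on the existing nodes:
groupadd -g 500 oinstall
groupadd -g 501 dba
useradd -u 500 -g oinstall -G dba oracle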
To verify that your installation is configured correctly, perform the following steps:
Ensure that the new nodes can access the private interconnect. This interconnect must be properly configured before you can complete the procedures in "Step 2: Extending Clusterware and Oracle Software to New Nodes".
If you are not using a cluster file system, then determine the location in which your cluster software was installed on the existing nodes. Make sure that you have at least 250MB of free space in the same location on each of the new nodes to install the Cluster Ready Services software. In addition, ensure that you have enough free space on each new node to install the Oracle binaries.
Ensure that user equivalence is established on the new nodes.
Execute the following platform-specific procedures:
On UNIX-based systems:
Verify user equivalence to and from an existing node to the new nodes using rsh or ssh.
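For example, from an existing node, confirm that each new node (node3 is a hypothetical node name) can be reached without a password prompt, and then repeat the test from the new node back to the existing node:
ssh node3 hostname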
On Windows-based systems:
Make sure that you can execute the following command from each of the existing nodes of your cluster, where host_name is the public network name of the new node:
NET USE \\host_name\C$
You have the required administrative privileges on each node if the operating system responds with:
Command completed successfully.
After completing the procedures in this section, your new nodes are connected to the cluster and configured with the required software to make them visible to the clusterware. Configure the new nodes as members of the cluster by extending the cluster software to the new nodes as described in Step 2.
The following topics describe how to add new nodes to the clusterware and to the Oracle database software layers using the OUI:
If you are using a Windows-based system, then skip this section and proceed to the next section. For UNIX-based systems, add the new nodes at the vendor clusterware layer according to the vendor clusterware documentation. For systems using shared storage for the CRS home, ensure that the existing clusterware is accessible by the new nodes. Also ensure that the new nodes can be brought online as part of the existing cluster. Then proceed to the next section to add the nodes at the Oracle clusterware layer.
On all platforms, complete the following steps. The OUI requires access to the private interconnect that you checked as part of the installation validation in Step 1.
On one of the existing nodes, go to the <CRS home>/oui/bin directory on UNIX-based systems or to the <CRS home>\oui\bin directory on Windows-based systems. On UNIX-based systems, run the addNode.sh script; on Windows-based systems, run the addNode.bat script to start the OUI.
The OUI runs in add node mode and the OUI Welcome page appears. Click Next and the Specify Cluster Nodes for Node Addition page appears.
The upper table on the Specify Cluster Nodes for Node Addition page shows the existing nodes associated with the CRS home from which you launched the OUI. Use the lower table to enter the public and private node names of the new nodes.
If you are using vendor clusterware, then the public node names automatically appear in the lower table. Click Next and the OUI verifies connectivity on the existing nodes and on the new nodes. The verifications that the OUI performs include determining whether:
The nodes are up
The nodes are accessible by way of the network
The user has write permission to create the CRS home on the new nodes
The user has write permission to the OUI inventory in the oraInventory directory on UNIX or the Inventory directory on Windows
If the OUI detects that the new nodes do not have an inventory location, then:
On UNIX platforms, the OUI displays a dialog asking you to run the oraInstRoot.sh script on the new nodes
On Windows platforms the OUI automatically updates the inventory location in the Registry key
If any verifications fail, then the OUI re-displays the Specify Cluster Nodes for Node Addition page with a Status column in both tables indicating errors. Correct the errors or deselect the nodes that have errors and proceed. However, you cannot deselect existing nodes; you must correct problems on nodes that are already part of your CRS cluster before you can proceed with node addition. If all the checks succeed, then the OUI displays the Node Addition Summary page.
Note: Oracle strongly recommends that you install CRS on every node in the cluster that has vendor clusterware installed.
The Node Addition Summary page displays the following information showing the products that are installed in the CRS home that you are extending to the new nodes:
The source for the add node process, which in this case is the CRS home
The private node names that you entered for the new nodes
The new nodes that you entered
The required and available space on the new nodes
The installed products listing the products that are already installed on the existing CRS home
Click Next and the OUI displays the Cluster Node Addition Progress page.
The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the phase of the node addition process and the phase's status according to the following platform-specific content:
On UNIX-based systems this page shows the following four OUI phases:
Instantiate Root Scripts—Instantiates rootaddnode.sh with the public and private node names that you entered on the Cluster Node Addition page.
Copy the CRS Home to the New Nodes—Copies the CRS home to the new nodes unless the CRS home is on a cluster file system.
Run rootaddnode.sh and root.sh—Displays a dialog prompting you to run rootaddnode.sh on the local node from which you are running the OUI. Then you are prompted to run root.sh on the new nodes.
Save Cluster Inventory—Updates the node list associated with the CRS home and its inventory.
On Windows-based systems, this page shows the following three OUI phases:
Copy CRS Home to New Nodes—Copies the CRS home to the new nodes unless the CRS home is on the Oracle Cluster File System.
Performs Oracle Home Setup—Updates the Registry entries for the new nodes, creates the services, and creates folder entries.
Save Cluster Inventory—Updates the node list associated with the CRS home and its inventory.
For all platforms, the Cluster Node Addition Progress page's Status column displays In Progress while a phase is in progress, Suspended when the phase is pending execution, and Succeeded after the phase completes. After the OUI displays the End of Node Addition page, click Exit to end the OUI session.
On Windows-based systems, execute the following command to identify the node names and node numbers that are currently in use:
<CRS home>\bin\olsnodes -n
Execute the crssetup.exe command using the next available node names and node numbers to add CRS information for the new nodes. Use the following syntax for crssetup.exe, where I is the first new node number, <nodeI> through <nodeI+n> is the list of nodes that you are adding, <nodeI-number> through <nodeI+n-number> represents the node numbers assigned to the new nodes, and <pnI> through <pnI+n> is the list of private network names for the new nodes:
<CRS home>\bin\crssetup.exe -nn <nodeI>,<nodeI-number>,<nodeI+1>,<nodeI+1-number>,...<nodeI+n>,<nodeI+n-number> -pn <pnI>,<nodeI-number>,<pnI+1>,<nodeI+1-number>,...<pnI+n>,<nodeI+n-number>
These are the private network names or IP addresses that you entered in Step 3 of this procedure in the Specify Cluster Nodes for Node Addition page. For example:
crssetup.exe -nn node3,3,node4,4 -pn node3_pvt,3,node4_pvt,4
On all platforms, execute the racgons utility from the bin subdirectory of the CRS home to configure the Oracle Notification Services (ONS) port number as follows:
racgons <nodeI>:4948 <nodeI+1>:4948 ... <nodeI+n>:4948
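For example, following this syntax for two hypothetical new nodes named node3 and node4:
racgons node3:4948 node4:4948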
After you have completed the procedures in this section for adding nodes at the Oracle clusterware layer, you have successfully extended the CRS home from your existing CRS home to the new nodes. Proceed to Step 3 to prepare storage for RAC on the new nodes.
To extend an existing RAC database to your new nodes, configure storage for the new nodes so that the storage is the same as on the existing nodes. For example, the Oracle Cluster Registry (OCR) and the voting disk must be accessible by the new nodes using the same path as the other nodes use. In addition, the OCR and voting disk devices must have the same permissions as on the existing nodes. Prepare the same type of storage on the new nodes as you are using on the other nodes in the RAC environment that you want to extend as follows:
Automatic Storage Management (ASM)
If you are using ASM, then make sure that the new nodes can access the ASM disks with the same permissions as the existing nodes.
Oracle Cluster File System (OCFS)
If you are using Oracle Cluster File Systems, then make sure that the new nodes can access the cluster file systems in the same way that the other nodes access them.
Vendor Cluster File Systems
If your cluster database uses vendor cluster file systems, then configure the new nodes to use the vendor cluster file systems. Refer to the vendor clusterware documentation for the pre-installation steps for your platform.
Raw Device Storage
If your cluster database uses raw devices, then prepare the new raw devices by following the procedures described in the following section.
See Also: The Oracle Real Application Clusters Installation and Configuration Guide for more information about Oracle Cluster File System
To prepare raw device storage on the new nodes, you need at least two new disk partitions to accommodate the redo logs for each new instance. Make these disk partitions the same size as the redo log partitions that you configured for the existing nodes' instances. Also create an additional logical partition for the undo tablespace for automatic undo management.
On applicable operating systems, you can create symbolic links to your raw devices. Optionally, on all platforms you can create a raw device mapping file and set the DBCA_RAW_CONFIG environment variable so that it points to the raw device mapping file.
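For example, in a Bourne-style shell on a UNIX-based system, where the mapping file location is a hypothetical path:
DBCA_RAW_CONFIG=/oracle/dbca_raw_config.txt
export DBCA_RAW_CONFIG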
See Also: The Oracle Real Application Clusters Installation and Configuration Guide for more information about configuring raw partitions and using raw device mapping files
Use your vendor-supplied tools to configure the required raw storage.
Perform the following steps from one of the existing nodes of the cluster:
Create or identify an extended partition.
Click inside an unallocated part of the extended partition.
Choose Create from the Partition menu. A dialog box appears in which you should enter the size of the partition. Ensure you use the same sizes as those you used on your existing nodes.
Click the newly created partition and select Assign Drive Letter from the Tool menu.
Select Don't Assign Drive Letter, and click OK.
Repeat steps 2 through 5 for the second and any additional partitions.
Select Commit Changes Now from the Partition menu to save the new partition information.
Create symbolic links so that the existing nodes and new nodes can recognize the new partitions you just created and the new nodes can recognize the pre-existing symbolic links to logical drives by following these steps:
Start the Object Link Manager (OLM) by entering the following command from the <CRS home>\bin directory on one of the existing nodes:
GUIOracleOBJManager
The OLM starts and automatically detects the symbolic links to the logical drives and displays them in OLM's graphical user interface.
Recall the disk and partition numbers for the partitions that you created in the previous section. Look for the disk and partition numbers in the OLM page and perform the following tasks:
Right-click next to the box under the New Link column and enter the link name for the first partition.
Repeat the previous step for the second and any additional partitions.
For example, if your RAC database name is db and it consists of two instances running on two nodes and you are adding a third instance on the third node, then your link names for your redo logs should be db_redo3_1, db_redo3_2, and so on.
To enable automatic undo management for a new node's instance, enter the link name for the logical partition for the undo tablespace that you created in the previous section. For example, if your RAC database name is db and it has two instances running on two nodes and you are adding a third instance on a third node, then your link name for the undo tablespace should be db_undotbs3.
Select Commit from the Options menu. This creates the new links on the current node.
Select Sync Nodes from the Options menu. This makes the new links visible to all of the nodes in the cluster, including the new nodes.
Select Exit from the Options menu to exit the Object Link Manager.
After completing the procedures in this section, you have configured your cluster storage so that the new nodes can access the Oracle software. Additionally, the existing nodes can access the new nodes and instances. Use the OUI as described in the procedures in Step 4 to configure the new nodes at the RAC database layer.
To add nodes at the Oracle RAC database layer, run the OUI in add node mode to configure your new nodes. If you have multiple Oracle homes, then perform the following steps for each Oracle home that you want to include on the new nodes:
On an existing node, from the $ORACLE_HOME/oui/bin directory on UNIX-based systems, run the addNode.sh script. From the %ORACLE_HOME%\oui\bin directory on Windows-based systems, run the addNode.bat script. This starts the OUI in add node mode and displays the OUI Welcome page. Click Next on the Welcome page and the OUI displays the Specify Cluster Nodes for Node Addition page.
The Specify Cluster Nodes for Node Addition page has a table showing the existing nodes associated with the Oracle home from which you launched the OUI. A node selection table appears on the bottom of this page showing the nodes that are available for addition. Select the nodes that you want to add and click Next.
The OUI verifies connectivity and performs availability checks on both the existing nodes and on the nodes that you want to add. Some of the checks performed determine whether:
The nodes are up
The nodes are accessible by way of the network
The user has write permission to create the Oracle home on the new nodes
The user has write permission to the OUI inventory in the oraInventory directory on UNIX or the Inventory directory on Windows on the existing nodes and on the new nodes
If the new nodes do not have an inventory set up, then on UNIX-based systems the OUI displays a dialog asking you to run the oraInstRoot.sh script on the new nodes. On Windows-based systems the OUI automatically updates the Registry entries for the inventory location. If any of the other checks fail, then fix the problem and proceed or deselect the node that has the error and proceed. You cannot deselect existing nodes; you must correct problems on the existing nodes before proceeding with node addition. If all of the checks succeed then the OUI displays the Node Addition Summary page.
The Node Addition Summary page has the following information about the products that are installed in the Oracle home that you are going to extend to the new nodes:
The source for the add node process, which in this case is the Oracle home
The existing nodes and new nodes
The new nodes that you selected
The required and available space on the new nodes
The installed products listing all the products that are already installed in the existing Oracle home
Click Finish and the OUI displays the Cluster Node Addition Progress page.
The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the phase of the node addition process and the phase's status according to the following platform-specific content:
On UNIX-based systems, the Cluster Node Addition Progress page shows the following four OUI phases:
Instantiate Root Scripts—Instantiates the root.sh script in the Oracle home by copying it from the local node
Copy the Oracle Home to the New Nodes—Copies the entire Oracle home from the local node to the new nodes unless the Oracle home is on a cluster file system
Run root.sh—Displays the dialog prompting you to run root.sh on the new nodes
Save Cluster Inventory—Updates the node list associated with the Oracle home and its inventory
On Windows-based systems, the Cluster Node Addition Progress shows the following three OUI phases:
Copy the Oracle Home To New Nodes—Copies the entire Oracle home to the new nodes unless the Oracle home is on a cluster file system
Performs Oracle Home Setup—Updates the Registry entries for the new nodes, creates the services, and creates folder entries
Save Cluster Inventory—Updates the node list associated with the Oracle home and its inventory
For all platforms, the Cluster Node Addition Progress page's Status column displays Succeeded when the phase completes, In Progress while the phase is in progress, and Suspended when the phase is pending execution. After the OUI displays the End of Node Addition page, click Exit to end the OUI session.
On UNIX-based systems only, run the root.sh script.
Execute the vipca utility from the bin subdirectory of the Oracle home using the -nodelist option with the following syntax, identifying the complete set of nodes that are now part of your RAC database, beginning with Node1 and ending with <NodeN>:
vipca -nodelist Node1,Node2,Node3,...<NodeN>
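For example, for a cluster that now consists of three nodes with hypothetical names node1, node2, and node3:
vipca -nodelist node1,node2,node3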
If the private interconnect interface names on the new nodes are not the same as the interconnect names on the existing nodes, then configure the private interconnect for the new nodes. Do this by executing the oifcfg utility with the setif option from the bin directory of the Oracle home, using the following syntax, where <subnet> is the subnet for the private interconnect of the RAC databases to which you are adding nodes:
oifcfg setif <interface-name>/<subnet>:<cluster_interconnect|public>
For example:
oifcfg setif hme3/172.168.16.0:cluster_interconnect
After completing the procedures in the previous section, you have defined the new nodes at the cluster database layer. You can now add database instances to the new nodes as described in Step 5.
Execute the following procedures on each new node to add instances:
Start the Database Configuration Assistant (DBCA) by entering dbca at the system prompt from the bin directory in $ORACLE_HOME on UNIX. On Windows-based systems, choose Start > Programs > Oracle - HOME_NAME > Configuration and Migration Tools > Database Configuration Assistant.
The DBCA displays the Welcome page for RAC. Click Help on any DBCA page for additional information.
Select Real Application Clusters database, click Next, and the DBCA displays the Operations page.
Select Instance Management, click Next, and the DBCA displays the Instance Management page.
Select Add Instance and click Next. The DBCA displays the List of Cluster Databases page that shows the databases and their current status, such as ACTIVE or INACTIVE.
From the List of Cluster Databases page, select the active RAC database to which you want to add an instance. If your user ID is not operating-system authenticated, then the DBCA prompts you for a user ID and password for a database user that has SYSDBA privileges. If the DBCA prompts you, then enter a valid user ID and password and click Next. The DBCA displays the List of Cluster Database Instances page showing the names of the existing instances for the RAC database that you selected.
Click Next to add a new instance and the DBCA displays the Adding an Instance page.
On the Adding an Instance page, enter the instance name in the field at the top of this page if the instance name that the DBCA provides does not match your existing instance name sequence. Then select the new node name from the list, click Next, and the DBCA displays the Services Page.
Enter the services information for the new node's instance, click Next, and the DBCA displays the Instance Storage page.
If you are using raw devices or raw partitions, then on the Instance Storage page select the Tablespaces folder and expand it. Then select the undo tablespace storage object and a dialog appears on the right-hand side. Change the default datafile name to the raw device name for the tablespace.
If you are using raw devices or raw partitions or if you want to change the default redo log group file name, then on the Instance Storage page select and expand the Redo Log Groups folder. For each redo log group number that you select, the DBCA displays another dialog box.
For UNIX-based systems, enter the raw device name that you created in the section "Configure Raw Storage on UNIX-Based Systems" in the File Name field.
On Windows-based systems, enter the symbolic link name that you created in the section, "Configure Raw Partitions on Windows-Based Systems".
If you are using a cluster file system, then click Finish on the Instance Storage page. If you are using raw devices on UNIX-based systems or disk partitions on Windows-based systems, then repeat step 10 for all of the other redo log groups, click Finish, and the DBCA displays a Summary dialog.
Review the information on the Summary dialog and click OK. Or click Cancel to end the instance addition operation. The DBCA displays a progress dialog showing the DBCA performing the instance addition operation. When the DBCA completes the instance addition operation, the DBCA displays a dialog asking whether you want to perform another operation.
Click No and exit the DBCA, or click Yes to perform another operation. If you click Yes, then the DBCA displays the Operations page.
After you have completed the procedures in this section, the DBCA has successfully added the new instance to the new node and completed the following steps:
Created and started an ASM instance on each new node if the existing instances were using ASM
Created a new database instance on each new node
For Windows-based systems, created and started the required services
Created and configured high availability components
Configured and started node applications for the GSD, Oracle Net Services listener, and Enterprise Manager agent
Created the Oracle Net configuration
Started the new instance
Created and started services if you entered services information on the Services Configuration page
After adding the instances to the new nodes using the steps described in this section, perform any needed service configuration procedures as described in Chapter 4, "Administering Services".
To add nodes to a cluster that already have clusterware and Oracle software installed on them, you must configure the new nodes with the Oracle software that is on the existing nodes of the cluster. To do this, run the OUI twice: once for the clusterware layer and once for the database layer, as described in the following procedures:
Add new nodes at the Oracle clusterware layer by running the OUI from the CRS Home on an existing node according to the following platform-specific procedures:
On UNIX execute the following command:
<CRS home>/oui/bin/addNode.sh -noCopy
On Windows execute the following command:
<CRS home>\oui\bin\addNode.bat -noCopy
Add new nodes at the Oracle software layer by running the OUI from the Oracle Home as follows:
On UNIX, execute:
$ORACLE_HOME/oui/bin/addNode.sh -noCopy
On Windows, execute:
%ORACLE_HOME%\oui\bin\addNode.bat -noCopy
In -noCopy mode, the OUI performs all add node operations except for the copying of software to the new nodes.
The procedures in this section explain how to use the DBCA to delete an instance from a RAC database. To delete an instance:
Start the DBCA on a node other than the node that hosts the instance that you want to delete. On the DBCA Welcome page select Oracle Real Application Clusters Database, click Next, and the DBCA displays the Operations page.
On the DBCA Operations page, select Instance Management, click Next, and the DBCA displays the Instance Management page.
On the Instance Management page, select Delete Instance, click Next, and the DBCA displays the List of Cluster Databases page.
Select a RAC database from which to delete an instance. If your user ID is not operating-system authenticated, then the DBCA also prompts you for a user ID and password for a database user that has SYSDBA privileges. If the DBCA prompts you for this, then enter a valid user ID and password. Click Next and the DBCA displays the List of Cluster Database Instances page. The List of Cluster Database Instances page shows the instances associated with the RAC database that you selected and the status of each instance.
Select a remote instance to delete and click Finish.
If you have services assigned to this instance, then the DBCA Services Management page appears. Use this feature to reassign services from this instance to other instances in the cluster database.
Review the information about the instance deletion operation on the Summary page and click OK. Otherwise, click Cancel to cancel the instance deletion operation. If you click OK, then the DBCA displays a Confirmation dialog.
Click OK on the Confirmation dialog to proceed with the instance deletion operation and the DBCA displays a progress dialog showing that the DBCA is performing the instance deletion operation. During this operation, the DBCA removes the instance and the instance's Oracle Net configuration. When the DBCA completes this operation, the DBCA displays a dialog asking whether you want to perform another operation.
Click No and exit the DBCA or click Yes to perform another operation. If you click Yes, then the DBCA displays the Operations page.
At this point, you have accomplished the following:
De-registered the selected instance from its associated Oracle Net Services listeners
Deleted the selected database instance from the instance's configured node
For Windows-based systems, deleted the selected instance's services
Removed the Oracle Net configuration
Deleted the Optimal Flexible Architecture (OFA) directory structure from the instance's configured node
Use the following procedures to delete nodes from Oracle clusters on UNIX-based systems:
If there are instances on the node that you want to delete, then execute the procedures in the section titled Deleting Instances from Real Application Clusters Databases before executing these procedures. If you are deleting more than one node, then delete the instances from all the nodes that you are going to delete.
After you have deleted the instances from the node that you want to delete, run the command $ORACLE_HOME/install/rootdeletenode.sh on the node that you are deleting. This deletes the Oracle Cluster Ready Services (CRS) node applications. If you are deleting more than one node from your cluster, then execute the command $ORACLE_HOME/install/rootdeletenode.sh node1,node2,...,<nodeN>, where node1 through <nodeN> is a comma-separated list of the nodes that you want to delete.
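For example, to delete the node applications for two hypothetical nodes named node3 and node4:
$ORACLE_HOME/install/rootdeletenode.sh node3,node4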
On the same node that you are deleting, execute the command runInstaller -updateNodeList ORACLE_HOME=<Home location> CLUSTER_NODES=node1,node2,...<nodeN>, where node1 through <nodeN> is a comma-separated list of the nodes that are remaining in the cluster. This list must exclude the nodes that you are deleting.
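For example, if node1 and node2 remain in the cluster after the deletion, where the Oracle home location shown is a hypothetical path:
runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/10.1.0/db_1 CLUSTER_NODES=node1,node2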
If you are not using a cluster file system for the Oracle home, then on the node that you are deleting, remove the Oracle database software by executing the rm command. Make sure that you are in the correct Oracle home of the node that you are deleting when you execute the rm command. Execute this command on all of the nodes that you are deleting.
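For example, assuming a hypothetical Oracle home location, verify the path carefully before executing a recursive delete:
rm -rf /u01/app/oracle/product/10.1.0/db_1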
On the node that you are deleting, run the command <CRS home>/install/rootdelete.sh to disable the CRS applications that are on the node. If the ocr.loc file is on a shared file system, then execute the command <CRS home>/install/rootdelete.sh remote sharedvar. If the ocr.loc file is not on a shared file system, then execute the command <CRS home>/install/rootdelete.sh remote nosharedvar.
If you are deleting more than one node from your cluster, then repeat this step on each node that you are deleting.
Run <CRS home>/install/rootdeletenode.sh on any remaining node in the cluster to delete the nodes from the Oracle cluster and to update the Oracle Cluster Registry (OCR). If you are deleting multiple nodes, then execute the command <CRS home>/install/rootdeletenode.sh node1,<node1-number>,node2,<node2-number>,...<nodeN>,<nodeN-number>, where node1 through <nodeN> is a list of the nodes that you want to delete and <node1-number> through <nodeN-number> represents the node numbers. To determine the node number of any node, execute <CRS home>/bin/olsnodes -n.
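For example, if olsnodes -n reports that node3 is node number 3 and node4 is node number 4, then run:
rootdeletenode.sh node3,3,node4,4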
On the same node, execute the command runInstaller -updateNodeList ORACLE_HOME=<Home location> CLUSTER_NODES=node1,node2,...<nodeN>, where node1 through <nodeN> is a comma-separated list of the nodes that are remaining in the cluster.
If you are not using a cluster file system, then on the node that you are deleting, remove the Oracle CRS software by executing the rm command. Make sure that you execute the rm command from the correct Oracle CRS home. Execute the rm command on every node that you are deleting.
Execute the following procedures to delete nodes from Oracle clusters on Windows-based platforms:
If there are instances on the node that you want to delete, then execute the procedures in the section titled "Deleting Instances from Real Application Clusters Databases" before continuing with these procedures. If you are deleting more than one node, then delete the instances from all the nodes that you are going to delete.
After you have removed all of the instances from the node that you want to delete, run the command %ORACLE_HOME%\bin\srvctl remove nodeapps -n <node_name> on the node to delete the Oracle Cluster Ready Services (CRS) node applications. If you are deleting more than one node from your cluster, then execute the command %ORACLE_HOME%\bin\srvctl remove nodeapps -n node1,node2,...,<nodeN>, where node1 through <nodeN> is a comma-separated list of the nodes that you want to delete.
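For example, to remove the node applications for a hypothetical node named node3:
srvctl remove nodeapps -n node3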
If the node that you are deleting has an ASM instance, then delete the ASM instance using the srvctl stop asm and srvctl remove asm commands.
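For example, assuming that the node you are deleting is named node3, you might run the following srvctl commands:
srvctl stop asm -n node3
srvctl remove asm -n node3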
On the same node, execute the command setup.exe -updateNodeList ORACLE_HOME=<Home location> CLUSTER_NODES=node1,node2,...<nodeN>, where node1 through <nodeN> is a comma-separated list of the nodes that are remaining in the cluster.
On the same node, delete the Windows Registry entries and ASM services using Oradim.
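For example, assuming that the ASM instance on the node being deleted has the hypothetical SID +ASM3 (the -ASMSID option is an assumption based on the 10g oradim utility; verify the available options with oradim -? before running):
oradim -DELETE -ASMSID +ASM3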
On the nodes that you want to delete, if you are not using Oracle Cluster File System for the Oracle home, then remove the Oracle database software using the Windows administrative tools. Make sure that you select the correct Oracle home to delete.
From any remaining node in the cluster, disable the CRS applications by running the command <CRS home>\bin\crssetup del -nn node1,<node1-number>,node2,<node2-number>,...<nodeN>,<nodeN-number>, where node1 through <nodeN> is a list of the nodes that you want to delete, and <node1-number> through <nodeN-number> represents the node numbers. To determine the node number of any node, execute the command <CRS home>\bin\olsnodes -n.
On the same node, execute the command setup.exe -updateNodeList ORACLE_HOME=<Home location> CLUSTER_NODES=node1,node2,...<nodeN>, where node1 through <nodeN> is a comma-separated list of the nodes that are remaining in the cluster.
On the nodes that you are deleting, remove the Oracle CRS software using Windows administrative tools. Make sure that you select the correct CRS home to delete. Also delete the CRS, CSS, and EVM services.