Normal deletion
1. Check the node status; if a node is pinned, it must be unpinned first
su - grid
[grid@rac1 ~]$ olsnodes -n -s -t
rac1 1 Active Unpinned
rac2 2 Active Unpinned
rac3 3 Active Unpinned
crsctl unpin css -n rac3
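The pin state can also be checked in a script by parsing the olsnodes output. A minimal sketch; the sample output below mirrors the cluster above, with rac3 shown as Pinned purely for illustration:

```shell
# Hypothetical output of: olsnodes -n -s -t  (on a live cluster: out=$(olsnodes -n -s -t))
out="rac1 1 Active Unpinned
rac2 2 Active Unpinned
rac3 3 Active Pinned"

# Any node whose 4th field is "Pinned" still needs: crsctl unpin css -n <node>
pinned=$(printf '%s\n' "$out" | awk '$4 == "Pinned" {print $1}')
echo "pinned: ${pinned:-none}"
```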
2. Check the database status
srvctl config database -d racdb
srvctl status database -db racdb
3. Back up the OCR
su - root
cd /opt/app/12.2.0.1/grid/bin/
./ocrconfig -showbackup
./ocrconfig -manualbackup
4. Delete the instance (from any node)
Use either the command line or the dbca GUI.
Command line (run as the oracle software owner):
srvctl remove instance -db racdb -instance racdb3
GUI:
su - oracle
dbca
5. Verify the instance has been deleted
crsctl stat res -t
lsnrctl status listener_scan1
select * from gv$instance;
6. Stop and disable the listener
srvctl stop listener -l listener -n rac3
srvctl disable listener -l listener -n rac3
srvctl status listener -l listener -n rac3
7. Update the node list on the node being removed (local update)
su - oracle
cd /opt/app/oracle/product/12.2.0.1/dbhome_1/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.2.0.1/dbhome_1 "CLUSTER_NODES={rac3}" -local
8. Deinstall the Oracle database software
cd /opt/app/oracle/product/12.2.0.1/dbhome_1/deinstall
./deinstall -local
9. On every remaining (running) node, update the node list
su - oracle
cd /opt/app/oracle/product/12.2.0.1/dbhome_1/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.2.0.1/dbhome_1 "CLUSTER_NODES={rac1,rac2}" -local
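The same -updateNodeList pattern recurs below with different node lists, so the remaining-nodes argument can be derived from the olsnodes output instead of being typed by hand. A sketch; the node names and the removed node rac3 are taken from this walkthrough:

```shell
# Hypothetical node list; on a live cluster: nodes=$(olsnodes)
nodes="rac1
rac2
rac3"
removed=rac3

# Drop the node being removed and comma-join the rest for CLUSTER_NODES=
cluster_nodes=$(printf '%s\n' "$nodes" | grep -v "^${removed}\$" | paste -sd, -)
echo "CLUSTER_NODES={$cluster_nodes}"
```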
10. Update the inventory (on the node being removed)
su - grid
cd /opt/app/12.2.0.1/grid/oui/bin/
./runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.2.0.1/grid "CLUSTER_NODES={rac3}" CRS=TRUE -silent -local
11. Deconfigure the cluster resources
su - root
cd /opt/app/12.2.0.1/grid/crs/install
./rootcrs.pl -deconfig -force
On RHEL / CentOS 7 this step can fail with:
---------------------------------------------------------------------------------------
Can't locate Env.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . . ./../../perl/lib) at crsinstall.pm line 286.
---------------------------------------------------------------------------------------
Copy the following file:
cp -p /opt/app/12.2.0.1/grid/perl/lib/5.22.0/Env.pm /usr/lib64/perl5/vendor_perl
Then run it again:
./rootcrs.pl -deconfig -force
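After copying Env.pm, you can confirm that the system perl can now load the module before re-running the script. A minimal check, assuming a perl interpreter is on PATH:

```shell
# Prints "Env.pm ok" once the module is resolvable from perl's @INC
if perl -MEnv -e 1 2>/dev/null; then
  echo "Env.pm ok"
else
  echo "Env.pm still missing"
fi
```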
12. Check again that the resources are stopped
Run on a surviving node:
su - root
crsctl stat res -t
13. Update the node list on all remaining running nodes
su - grid
cd /opt/app/12.2.0.1/grid/oui/bin/
./runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.2.0.1/grid "CLUSTER_NODES={rac1,rac2}" CRS=TRUE -silent -local
14. Deinstall the Grid software
Note: if -local is omitted, the Grid software is removed from every node in the cluster.
su - grid
cd /opt/app/12.2.0.1/grid/deinstall/
./deinstall -local
Remove leftover files:
rm -rf /etc/oraInst.loc
rm -rf /opt/ORCLfmap
rm -rf /etc/oratab
15. On a running node, delete rac3 from the cluster
su - root
cd /opt/app/12.2.0.1/grid/bin
./crsctl delete node -n rac3
su - grid
[grid@rac1 ~]$ olsnodes -s -t
rac1 Active Unpinned
rac2 Active Unpinned
Deletion after node failure
1. Delete the instance
Delete it with dbca or with the command below.
If the failed node is down, the dbca GUI cannot reach it; in that case run the following as the oracle user on a healthy node:
dbca -silent -deleteInstance -nodeList rac3 -gdbName racdb -instanceName racdb3 -sysDBAUserName sys -sysDBAPassword "Oracle123"
2. Verify the instance has been deleted
crsctl stat res -t
lsnrctl status listener_scan1
select * from gv$instance;
srvctl config database -d racdb
3. Update the Oracle node list on all healthy nodes
su - oracle
cd /opt/app/oracle/product/12.2.0.1/dbhome_1/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.2.0.1/dbhome_1 "CLUSTER_NODES={rac1,rac2}" -local
4. Update the Grid node list on all healthy nodes
su - grid
cd /opt/app/12.2.0.1/grid/oui/bin/
./runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.2.0.1/grid "CLUSTER_NODES={rac1,rac2}" CRS=TRUE -silent -local
5. Stop the node VIP and delete the node
su - grid
crsctl stat res -t
su - root
cd /opt/app/12.2.0.1/grid/bin/
./srvctl stop vip -i rac3        # stop
./srvctl stop vip -i rac3 -f     # force stop
./crsctl delete node -n rac3     # delete the node
6. Verify that the node has been deleted
su - grid
cluvfy stage -post nodedel -n rac3 -verbose
Troubleshooting
Error when deleting the node:
Shut down all nodes, start only the master node, delete the failed node, then start the remaining nodes.
./crsctl delete node -n host2node2
CRS-4662: Error while trying to delete node host2node2.
CRS-4000: Command Delete failed, or completed with errors.