Delete the instance -> delete the Oracle software -> delete the Grid software

Normal removal

1. Check the node states; any node that is Pinned must be set to Unpinned first

  1. su - grid
  2. [grid@rac1 ~]$ olsnodes -n -s -t
  3. rac1 1 Active Unpinned
  4. rac2 2 Active Unpinned
  5. rac3 3 Active Unpinned
  6. crsctl unpin css -n rac3 (only needed when the node shows Pinned; run as root)
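To find pinned nodes mechanically, the last column of the `olsnodes -n -s -t` output can be filtered; a small sketch (the helper name is my own):

```shell
# Print the names of nodes whose state column reads "Pinned";
# feed it the output of `olsnodes -n -s -t` on stdin.
pinned_nodes() {
  awk '$NF == "Pinned" { print $1 }'
}
```

Each name it prints still needs a `crsctl unpin css -n <node>` run as root.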

2. Check the database status

  1. srvctl config database -d racdb
  2. srvctl status database -db racdb

3. Take an OCR backup

  1. su - root
  2. cd /opt/app/12.2.0.1/grid/bin/
  3. ./ocrconfig -showbackup
  4. ./ocrconfig -manualbackup
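Since the manual OCR backup is the safety net for every destructive step that follows, it may be worth aborting the whole procedure as soon as any step fails; a minimal guard sketch (the helper name is my own):

```shell
# Run one step of the procedure; on failure, report which step broke and
# return non-zero so the caller can stop before touching anything else.
run_or_abort() {
  "$@" && return 0
  echo "step failed, aborting: $*" >&2
  return 1
}

# usage, with the path from the step above:
#   run_or_abort ./ocrconfig -manualbackup || exit 1
```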

4. Delete the instance (from any node)

Use either the command line or the DBCA GUI:

  1. su - oracle
  2. srvctl remove instance -db racdb -instance racdb3 (command line)
  3. dbca (GUI alternative)


5. Verify that the instance was deleted

  1. crsctl stat res -t
  2. lsnrctl status listener_scan1
  3. select * from gv$instance;

6. Stop and disable the listener

  1. srvctl stop listener -l listener -n rac3
  2. srvctl disable listener -l listener -n rac3
  3. srvctl status listener -l listener -n rac3

7. Update the node list on the node being removed (local inventory)

  1. su - oracle
  2. cd /opt/app/oracle/product/12.2.0.1/dbhome_1/oui/bin
  3. ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.2.0.1/dbhome_1 "CLUSTER_NODES=rac3" -local
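The same `runInstaller -updateNodeList` invocation recurs in steps 7, 9, 10, and 13, varying only the home and the node list; a helper that assembles the command string (the function name is my own) makes the pattern explicit:

```shell
# Build the -updateNodeList command line for a given Oracle/Grid home and a
# comma-separated node list; extra flags such as CRS=TRUE -silent go in $3...
update_nodelist_cmd() {
  home=$1; nodes=$2; shift 2
  echo "$home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$home \"CLUSTER_NODES=$nodes\" $* -local"
}
```

It only builds the string; review it, then run it on the appropriate node as the matching software owner.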

8. Deinstall the Oracle software

  1. cd /opt/app/oracle/product/12.2.0.1/dbhome_1/deinstall
  2. ./deinstall -local

9. Update the node list on all other running nodes

  1. su - oracle
  2. cd /opt/app/oracle/product/12.2.0.1/dbhome_1/oui/bin
  3. ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.2.0.1/dbhome_1 "CLUSTER_NODES=rac1,rac2" -local

10. Update the Grid inventory (on the node being removed)

  1. su - grid
  2. cd /opt/app/12.2.0.1/grid/oui/bin/
  3. ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.2.0.1/grid "CLUSTER_NODES=rac3" CRS=TRUE -silent -local

11. Deconfigure the cluster resources

  1. su - root
  2. cd /opt/app/12.2.0.1/grid/crs/install
  3. ./rootcrs.pl -deconfig -force
  4. On RHEL/CentOS 7 this step may fail with:
  5. ---------------------------------------------------------------------------------------
  6. Can't locate Env.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . . ./../../perl/lib) at crsinstall.pm line 286.
  7. ---------------------------------------------------------------------------------------
  8. Copy the missing module into place:
  9. cp -p /opt/app/12.2.0.1/grid/perl/lib/5.22.0/Env.pm /usr/lib64/perl5/vendor_perl
  10. Then run the command again:
  11. ./rootcrs.pl -deconfig -force
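The copy can be made idempotent so that re-running the fix on a node that already has the module is harmless; a sketch with the paths from the error message as defaults (the helper name is my own, and the real copy must run as root):

```shell
# Copy Env.pm from the Grid home's bundled perl into the system vendor_perl
# directory, but only when it is not already there.
ensure_env_pm() {
  src=${1:-/opt/app/12.2.0.1/grid/perl/lib/5.22.0/Env.pm}
  dst=${2:-/usr/lib64/perl5/vendor_perl}
  [ -f "$dst/Env.pm" ] && return 0   # already present, nothing to do
  cp -p "$src" "$dst/"
}
```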

12. Check again that the resources are stopped

  On a node that is still running, execute:
  1. su - root
  2. crsctl stat res -t

13. Update the node list on all running nodes

  1. su - grid
  2. cd /opt/app/12.2.0.1/grid/oui/bin/
  3. ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.2.0.1/grid "CLUSTER_NODES=rac1,rac2" CRS=TRUE -silent -local

14. Deinstall the Grid software

Note: if -local is omitted, deinstall removes the Grid software from every node in the cluster.

  1. su - grid
  2. cd /opt/app/12.2.0.1/grid/deinstall/
  3. ./deinstall -local
  4. Remove leftover files (as root):
  5. rm -rf /etc/oraInst.loc
  6. rm -rf /opt/ORCLfmap
  7. rm -rf /etc/oratab

15. On a running node, delete rac3 from the cluster

  1. su - root
  2. cd /opt/app/12.2.0.1/grid/bin
  3. ./crsctl delete node -n rac3
  4. su - grid
  5. [grid@rac1 ~]$ olsnodes -s -t
  6. rac1 Active Unpinned
  7. rac2 Active Unpinned

Removing a failed node

1. Delete the instance

Delete it with DBCA or with the command below.
If the failed node is down and the DBCA GUI cannot see it, run the following as the oracle user on a surviving node:

  1. dbca -silent -deleteInstance -nodeList rac3 -gdbName racdb -instanceName racdb3 -sysDBAUserName sys -sysDBAPassword "Oracle123"

2. Verify that the instance was deleted

  1. crsctl stat res -t
  2. lsnrctl status listener_scan1
  3. select * from gv$instance;
  4. srvctl config database -d racdb

3. Update the Oracle node list on all surviving nodes

  1. su - oracle
  2. cd /opt/app/oracle/product/12.2.0.1/dbhome_1/oui/bin
  3. ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/oracle/product/12.2.0.1/dbhome_1 "CLUSTER_NODES=rac1,rac2" -local

4. Update the Grid node list on all surviving nodes

  1. su - grid
  2. cd /opt/app/12.2.0.1/grid/oui/bin/
  3. ./runInstaller -updateNodeList ORACLE_HOME=/opt/app/12.2.0.1/grid "CLUSTER_NODES=rac1,rac2" CRS=TRUE -silent -local

5. Stop the VIP and delete the node

  1. su - grid
  2. crsctl stat res -t
  3. su - root
  4. cd /opt/app/12.2.0.1/grid/bin/
  5. ./srvctl stop vip -i rac3 (stop the VIP)
  6. ./srvctl stop vip -i rac3 -f (force stop if needed)
  7. ./crsctl delete node -n rac3 (delete the node)

6. Verify from the surviving nodes that the node was removed

  1. su - grid
  2. cluvfy stage -post nodedel -n rac3 -verbose
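Besides cluvfy, the olsnodes listing itself can be checked for the removed name; a grep-based sketch (the helper name is my own) that takes the listing as a string:

```shell
# Succeed (exit 0) when NODE no longer appears in the first column of an
# olsnodes-style listing passed as the first argument.
node_removed() {
  listing=$1; node=$2
  ! printf '%s\n' "$listing" | awk '{ print $1 }' | grep -qx "$node"
}

# usage:  node_removed "$(olsnodes -s -t)" rac3 && echo "rac3 is gone"
```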

Troubleshooting

crsctl delete node reports an error.
Shut down all nodes, start only the master node, delete the failed node, then start the other nodes.

  1. ./crsctl delete node -n host2node2
  2. CRS-4662: Error while trying to delete node host2node2.
  3. CRS-4000: Command Delete failed, or completed with errors.
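One way to script the clusterware side of that recovery sequence is sketched below. The path, the node name, and the helper are my own, and I interpret "shut down all nodes" as stopping Clusterware cluster-wide; it defaults to a dry run that only prints each step:

```shell
# Recovery flow for a failed 'crsctl delete node'. With DRY_RUN=1 (the
# default here) each command is printed instead of executed; run as root
# with DRY_RUN=0 to actually perform the steps.
GRID_HOME=${GRID_HOME:-/opt/app/12.2.0.1/grid}
DRY_RUN=${DRY_RUN:-1}

step() {
  if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

recover_failed_delete() {
  step "$GRID_HOME/bin/crsctl" stop cluster -all    # stop Clusterware everywhere
  step "$GRID_HOME/bin/crsctl" start cluster        # start only the master node
  step "$GRID_HOME/bin/crsctl" delete node -n "$1"  # retry deleting the failed node
  step "$GRID_HOME/bin/crsctl" start cluster -all   # bring up the remaining nodes
}
```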