Oracle 11gR2 RAC and GoldenGate are both popular Oracle products. After a bit of study and reading of the documentation, setting up GoldenGate replication on a single node should not be particularly difficult. Replicating from one RAC system to another RAC system with GoldenGate, however, does involve some configuration techniques and policy choices. A while ago I ran into exactly such a question: assuming the source and the target are each two-node RAC systems, how do you make sure the Replicat fails over automatically when one of the target nodes goes down? Once you understand how GG works and how RAC manages its resources, the answer becomes quite clear.
Today I will give a systematic walkthrough of the complete configuration steps for OGG (short for Oracle GoldenGate, used throughout this article) on 11gR2 RAC, and briefly discuss how to handle the scenario described above.
Phase 1: Download OGG (refer to the earlier single-node replication example; I will not repeat it here)
OGG download location
Phase 2: Install OGG
1) Log on to any node of the source RAC system and create a shared directory on ACFS for OGG, for example /cloudfs/goldengate (see the sketch after step 2)
2) Unpack the OGG installation package into the /cloudfs/goldengate directory
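A minimal sketch of steps 1 and 2, assuming the ACFS file system is already mounted at /cloudfs on all nodes; the installation media file names are illustrative only:
$ df -h /cloudfs                                    # confirm the ACFS mount is visible on every node
$ mkdir /cloudfs/goldengate
$ cd /cloudfs/goldengate
$ unzip /tmp/fbo_ggs_Linux_x64_ora11g_64bit.zip     # illustrative media file name
$ tar xf fbo_ggs_Linux_x64_ora11g_64bit.tar         # illustrative tar file name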
3) Set the environment variables that OGG needs, for example:
export LD_LIBRARY_PATH=/cloudfs/goldengate:$ORACLE_HOME/lib:$LD_LIBRARY_PATH
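Besides the library path, OGG also needs ORACLE_HOME and ORACLE_SID to resolve the database. A minimal sketch, using the ORACLE_HOME from this example and an illustrative local instance name (adjust ORACLE_SID to the instance running on each node):
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=ggdb1              # illustrative: instance 1 of database 'ggdb' on this node
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=/cloudfs/goldengate:$ORACLE_HOME/lib:$LD_LIBRARY_PATH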
4) Start ggsci, create the working directories, make the necessary settings, and start the Manager
$ ggsci
GGSCI > create subdirs
(Optional, needed for DDL and sequence replication support) Create and edit the GLOBALS parameter file:
GGSCI > EDIT PARAMS ./GLOBALS
Add this line to the GLOBALS parameter file:
GGSCHEMA ggs
NOTE: 'ggs' is the example OGG user and will be used in the rest of this document.
GGSCI > EDIT PARAMS mgr
Add the following lines to the Manager parameter file:
PORT 7809
AUTOSTART ER *
AUTORESTART ER *
GGSCI > START mgr
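To confirm that the Manager is up before moving on, a quick check with the standard GGSCI command:
GGSCI > INFO MGR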
5) Repeat steps 1-4 above on the target system. Note the directory name: on the target we use /mycloudfs/goldengate to distinguish it from the source.
Phase 3: Prepare the source and target RAC databases
1) Create the OGG user 'ggs' on both the source and target databases; connect to each database with SQL*Plus as SYSDBA:
SQL> CREATE USER ggs IDENTIFIED BY ggs;
SQL> GRANT CONNECT,RESOURCE,DBA TO ggs;
2) (Optional, adds Oracle sequence replication support) On both the source and target databases, go to the OGG directory and run this script, entering the OGG user 'ggs' when prompted:
SQL> @sequence.sql
3) Enable supplemental logging on the source database:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
SQL> ALTER SYSTEM SWITCH LOGFILE;
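To verify that supplemental logging is now in place, a quick check against V$DATABASE:
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI, SUPPLEMENTAL_LOG_DATA_FK FROM V$DATABASE;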
(Optional) Add Oracle DDL replication support
4) On the source system, go to the OGG directory and connect to the database with SQL*Plus as SYSDBA:
SQL> GRANT EXECUTE ON utl_file TO ggs;
5) On the source system, run the following scripts, providing the OGG user 'ggs' when prompted:
SQL> @marker_setup.sql
SQL> @ddl_setup.sql
NOTE: enter 'INITIALSETUP' when prompted for the mode of installation.
SQL> @role_setup.sql
SQL> @ddl_enable.sql
SQL> @ddl_pin ggs
NOTE: 'ggs' here is the OGG user.
Phase 4: Configure the Extract group on the source
1) Issue the following command to log on to the database.
GGSCI > DBLOGIN USERID ggs, PASSWORD ggs
2) Create a primary Extract group 'myext':
GGSCI > ADD EXTRACT myext, TRANLOG, BEGIN NOW, THREADS 2
NOTE: THREADS value is the number of your RAC instances.
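If you are not sure how many redo threads the source RAC database has, you can check with a standard dictionary view:
SQL> SELECT THREAD#, STATUS FROM V$THREAD;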
3) Create a local trail. The primary Extract writes to this trail, and the data-pump Extract reads it.
GGSCI > ADD EXTTRAIL /cloudfs/goldengate/dirdat/et, EXTRACT myext
NOTE: 'et' is the example trail identifier for Extract 'myext'.
4) Create and edit the parameter file for Extract 'myext':
GGSCI > EDIT PARAMS myext
Add the following lines to this parameter file:
EXTRACT myext
SETENV (ORACLE_HOME = "/u01/app/oracle/product/11.2.0/dbhome_1")
USERID ggs@ggdb, PASSWORD ggs
TRANLOGOPTIONS DBLOGREADER
THREADOPTIONS MAXCOMMITPROPAGATIONDELAY 20000
EXTTRAIL /cloudfs/goldengate/dirdat/et
DYNAMICRESOLUTION
DDL INCLUDE ALL
TABLE hr.*;
NOTE 1: make sure the SQL*Net connection string 'ggdb' works.
NOTE 2: 'hr' is the example schema which will be synchronized to the target system.
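For NOTE 1, a minimal tnsnames.ora entry for 'ggdb' might look like the following; the SCAN name rac11box-scan and the service name ggdb are illustrative assumptions for this example:
ggdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac11box-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = ggdb))
  )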
Phase 5: Configure the data pump Extract group on the source
1)Create a data pump group 'mypump':
GGSCI > ADD EXTRACT mypump, EXTTRAILSOURCE /cloudfs/goldengate/dirdat/et, BEGIN now
2) Specify a remote trail that will be created on the target system.
GGSCI > ADD RMTTRAIL /mycloudfs/goldengate/dirdat/rt, EXTRACT mypump
NOTE: 'rt' is the example trail identifier for Extract 'mypump', and use the target OGG directory '/mycloudfs/goldengate' here.
3) Create and edit the parameter file for Extract 'mypump':
GGSCI > EDIT PARAMS mypump
Add the following lines to this parameter file:
EXTRACT mypump
RMTHOST rac12box-scan, MGRPORT 7809
RMTTRAIL /mycloudfs/goldengate/dirdat/rt
PASSTHRU
TABLE hr.*;
NOTE: RMTHOST is the target host. If you also plan to set up HA on the target system, specify the target system's VIP as RMTHOST; otherwise simply use the target system's IP address or hostname.
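Before starting the pump, it is worth confirming that the target Manager port is reachable from every source node; a quick check with whichever tool is available:
$ telnet rac12box-scan 7809
$ nc -vz rac12box-scan 7809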
4) Start Extract 'myext' and 'mypump':
GGSCI > START myext
GGSCI > START mypump
5) Check the status of OGG processes:
GGSCI > info all
Phase 6: Configure the Replicat group on the target
1) Create a Replicat group 'rept', which reads trails from Extract 'mypump':
GGSCI > ADD REPLICAT rept, EXTTRAIL /mycloudfs/goldengate/dirdat/rt, nodbcheckpoint
2) Create and edit the parameter file for Replicat 'rept':
GGSCI > EDIT PARAMS rept
Add the following lines to this parameter file, assuming the same ORACLE_HOME and a target database 'ggdb', as in the source environment:
REPLICAT rept
SETENV (ORACLE_HOME = "/u01/app/oracle/product/11.2.0/dbhome_1")
USERID ggs@ggdb,PASSWORD ggs
ASSUMETARGETDEFS
HANDLECOLLISIONS
REPERROR (DEFAULT, DISCARD)
DDLERROR DEFAULT DISCARD
DDLOPTIONS REPORT
DISCARDFILE /mycloudfs/goldengate/repsz.dsc,append,megabytes 100
MAP hr.*, TARGET hr.*;
NOTE: make sure the SQL*Net connection string 'ggdb' works.
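A quick way to validate the connection string on the target before starting the Replicat (both utilities ship with the Oracle installation):
$ tnsping ggdb
$ sqlplus ggs/ggs@ggdb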
3) Start Replicat 'rept':
GGSCI > START rept
4) Check the status of OGG processes:
GGSCI > info all
Phase 7: Verify GoldenGate replication
1) Log on to the source database as user 'hr' and perform some simple DDL and DML operations, for example:
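A hypothetical test table (gg_test is not part of the standard HR schema; it is used here only for illustration):
SQL> CREATE TABLE hr.gg_test (id NUMBER PRIMARY KEY, note VARCHAR2(50));
SQL> INSERT INTO hr.gg_test VALUES (1, 'hello from source');
SQL> COMMIT;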
2) Check that the data changes have been captured by the Extracts on the source system:
GGSCI > STATS myext
GGSCI > STATS mypump
3) Log on to the target system as the oracle user and check the status of Replicat 'rept':
GGSCI > STATS rept
4) Compare the output and make sure the data changes have been synchronized.
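Continuing the hypothetical gg_test example from step 1, the simplest check is to query the table on the target database:
SQL> SELECT * FROM hr.gg_test;
The row inserted on the source should appear once the Replicat has applied it.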
5) (Optional, for the HA setup that follows) Stop OGG on the source system:
GGSCI > STOP myext
GGSCI > STOP mypump
GGSCI > STOP mgr
6) (Optional, for the HA setup that follows) Stop OGG on the target system:
GGSCI > STOP rept
GGSCI > STOP mgr
The HA configuration will be covered in the next article; stay tuned.