
Oracle GoldenGate Basic Classic

Create environment
01. Source & target - Create the GoldenGate tablespace.
02. Source & target - Create the GoldenGate schema owner.
03. Grant privileges, including DBA, to the GG schema owner.
04. Add the schema owner to the global parameter file ./GLOBALS.
05. Execute the role setup script to create the role GGS_GGSUSER_ROLE.
06. Grant the role GGS_GGSUSER_ROLE to the GG user on both source and target (a SQL sketch of these steps follows).
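A minimal sketch of these steps, run as SYSDBA on both source and target. The tablespace name ggs_data, schema owner ggs_owner, and datafile path are placeholders to adjust for your environment; role_setup.sql is the role script shipped in the GoldenGate install directory.
$ sqlplus / as sysdba
SQL> CREATE TABLESPACE ggs_data DATAFILE '/<path>/ggs_data01.dbf' SIZE 500M AUTOEXTEND ON;
SQL> CREATE USER ggs_owner IDENTIFIED BY <pwd> DEFAULT TABLESPACE ggs_data TEMPORARY TABLESPACE temp;
SQL> GRANT CONNECT, RESOURCE, DBA TO ggs_owner;
SQL> @role_setup.sql
SQL> GRANT GGS_GGSUSER_ROLE TO ggs_owner;
Then, from the GoldenGate home, register the schema owner in ./GLOBALS:
$ ggsci
GGSCI 1> EDIT PARAMS ./GLOBALS
GGSCHEMA ggs_owner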
Configure GG Extract
01. Source and Target - Configure Manager Parameters
$ ggsci
GGSCI 1> EDIT PARAMS MGR
PORT 7809
DYNAMICPORTLIST 7810-7820
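Here PORT 7809 is the Manager's fixed listening port, and DYNAMICPORTLIST gives the range of ports the Manager can assign to processes it starts dynamically (for example, the Collector on the target).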
02. Source - Create the parameter file for Extract ex1.
EXTRACT ex1
USERID <id>, PASSWORD <pwd>
EXTTRAIL /home/oracle/goldengate/dirdat/ex
TABLE <Schema>.*;
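The ex1 parameter file can be created from GGSCI in the same way as the Manager file (EDIT PARAMS opens dirprm/ex1.prm in the default editor):
$ ggsci
GGSCI 1> EDIT PARAMS ex1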
03. Source - Configure the Data Pump parameters.
EXTRACT dp1
USERID <id>, PASSWORD <pwd>
RMTHOST <hostname>, MGRPORT 7809
RMTTRAIL /home/oracle/goldengate/dirdat/rt
TABLE <schema>.*;
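As above, the dp1 parameter file can be created with:
$ ggsci
GGSCI 1> EDIT PARAMS dp1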
Target - Create the checkpoint table
$ ggsci
GGSCI 1> DBLOGIN USERID <id>, PASSWORD <pwd>
GGSCI 2> ADD CHECKPOINTTABLE <Schema>.checkpointtable
Add the checkpoint table to ./GLOBALS
$ ggsci
GGSCI 1> EDIT PARAMS ./GLOBALS
GGSCHEMA <Schema>
CHECKPOINTTABLE <Schema>.checkpointtable
Configure GG Replicat
Target - Create the parameter file for rep1
$ ggsci
GGSCI 1> EDIT PARAMS rep1
REPLICAT rep1
USERID <id>, PASSWORD <pwd>
ASSUMETARGETDEFS
DISCARDFILE /home/oracle/goldengate/discards, PURGE
MAP <Schema>.*, TARGET <Schema>.*;
Note: You can use APPEND in place of PURGE.
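For example, to keep prior discard records and append new ones rather than truncating the file at process startup:
DISCARDFILE /home/oracle/goldengate/discards, APPEND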
Source Server - Configure supplemental logging for all tables that will be replicated
$ ggsci
GGSCI 1> DBLOGIN USERID <id>, PASSWORD <pwd>
GGSCI 2> ADD TRANDATA <TableName>
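ADD TRANDATA also accepts wildcards, so if every table in the schema is being replicated the whole schema can be handled in one command (assuming DBLOGIN has already been issued as above):
GGSCI 2> ADD TRANDATA <Schema>.*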
Source Server Add Extract
$ ggsci
GGSCI 1> ADD EXTRACT ex1, TRANLOG, BEGIN NOW
Source Add the Extract Trail
$ ggsci
GGSCI 1> ADD EXTTRAIL /home/oracle/goldengate/dirdat/ex, EXTRACT ex1
Source Add the Data Pump Process
$ ggsci
GGSCI 1> ADD EXTRACT dp1, EXTTRAILSOURCE /home/oracle/goldengate/dirdat/ex
Source Add the Data Pump Trail
On the source server, add the Data Pump trail (/home/oracle/goldengate/dirdat/rt).
This trail is actually created on the target server.
However, its name is required in order to set up the Data Pump process on the source server.
$ ggsci
GGSCI 1> ADD RMTTRAIL /home/oracle/goldengate/dirdat/rt, EXTRACT dp1
Target Add the Replication Process
$ ggsci
GGSCI 1> ADD REPLICAT rep1, EXTTRAIL /home/oracle/goldengate/dirdat/rt
Source Start Manager
$ ggsci
GGSCI 1> START MANAGER
Target Start the Manager
$ ggsci
GGSCI 1> START MANAGER
Source Start Extract Process
$ ggsci
GGSCI 1> START EXTRACT ex1
Verify the Extract Process
$ ggsci
GGSCI 1> INFO EXTRACT ex1
Start Data Pump Process
$ ggsci
GGSCI 3> START EXTRACT dp1
Verify Data Pump 
$ ggsci
GGSCI 2> INFO EXTRACT dp1
Target Start Replication
$ ggsci
GGSCI 1> START REPLICAT rep1
Verify Replication 
$ ggsci
GGSCI 2> INFO REPLICAT rep1
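As an additional cross-check on either server, INFO ALL shows the status of the Manager and every Extract/Replicat, and STATS reports the operations processed; both are standard GGSCI commands:
$ ggsci
GGSCI 1> INFO ALL
GGSCI 2> STATS REPLICAT rep1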
