11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)

APPLIES TO:

Oracle Database Cloud Exadata Service - Version N/A and later
Oracle Database Cloud Service - Version N/A and later
Oracle Database - Enterprise Edition - Version 10.2.0.3 and later
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Linux x86-64

PURPOSE

Consider using the new release of this procedure, version 4. This version has drastically simplified the steps and procedure. Before proceeding, review:
V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup Note 2471245.1

When using Cross Platform Transportable Tablespaces (XTTS) to migrate data between systems that have different endian formats, the amount of downtime required can be substantial because it is directly proportional to the size of the data set being moved.  However, combining XTTS with Cross Platform Incremental Backup can significantly reduce the amount of downtime required to move data between platforms.


Traditional Cross Platform Transportable Tablespaces

The high-level steps in a typical XTTS scenario are the following:

  1. Make tablespaces in source database READ ONLY
  2. Transfer datafiles to destination system
  3. Convert datafiles to destination system endian format
  4. Export metadata of objects in the tablespaces from source database using Data Pump
  5. Import metadata of objects in the tablespaces into destination database using Data Pump
  6. Make tablespaces in destination database READ WRITE
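
For orientation, the following is a minimal sketch of these six steps for tablespaces TS1 and TS2 (the same example tablespaces used later in this document), assuming a Solaris SPARC source and a Linux destination.  The datafile names, staging paths, directory objects, and credentials are illustrative placeholders only, not part of any supplied procedure.

SQL@source> alter tablespace TS1 read only;
SQL@source> alter tablespace TS2 read only;

[oracle@source]$ scp /u01/oradata/prod/ts1_01.dbf /u01/oradata/prod/ts2_01.dbf oracle@dest:/stage_dest

RMAN> convert datafile '/stage_dest/ts1_01.dbf' from platform 'Solaris[tm] OE (64-bit)' format '/u01/oradata/dest/ts1_01.dbf';
RMAN> convert datafile '/stage_dest/ts2_01.dbf' from platform 'Solaris[tm] OE (64-bit)' format '/u01/oradata/dest/ts2_01.dbf';

[oracle@source]$ expdp system/<password> directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
logfile=tts_exp.log transport_tablespaces=TS1,TS2

[oracle@source]$ scp <path_of_DATA_PUMP_DIR>/tts_meta.dmp oracle@dest:<path_of_DATA_PUMP_DIR>/

[oracle@dest]$ impdp system/<password> directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
logfile=tts_imp.log transport_datafiles='/u01/oradata/dest/ts1_01.dbf','/u01/oradata/dest/ts2_01.dbf'

SQL@dest> alter tablespace TS1 read write;
SQL@dest> alter tablespace TS2 read write;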

Because the data transported must be made read only at the very start of the procedure, the application that owns the data is effectively unavailable to users for the entire duration of the procedure.  Due to the serial nature of the steps, the downtime required for this procedure is proportional to the amount of data.  If data size is large, datafile transfer and convert times can be long, thus downtime can be long.


Reduce Downtime using Cross Platform Incremental Backup

To reduce the amount of downtime required for XTTS, Oracle has enhanced RMAN's ability to roll forward datafile copies using incremental backups, to work in a cross-platform scenario.  By using a series of incremental backups, each smaller than the last, the data at the destination system can be brought almost current with the source system, before any downtime is required.  The downtime required for datafile transfer and convert when combining XTTS with Cross Platform Incremental Backup is now proportional to the rate of data block changes in the source system.


The Cross Platform Incremental Backup feature does not affect the amount of time it takes to perform other actions for XTTS, such as metadata export and import.  Hence, databases that have very large amounts of metadata (DDL) will see limited benefit from Cross Platform Incremental Backup since migration time is typically dominated by metadata operations, not datafile transfer and conversion.


Only those database objects that are physically located in the tablespaces being transported will be copied to the destination system. If other objects located in different tablespaces also need to be transported (for example, PL/SQL objects, sequences, and so on that reside in the SYSTEM tablespace), you can use Data Pump to copy those objects to the destination system.


The high-level steps using the cross platform incremental backup capability are the following:

1.  Prepare phase (source data remains online)

    1. Transfer datafiles to destination system
    2. Convert datafiles, if necessary, to destination system endian format

2.  Roll Forward phase (source data remains online - Repeat this phase as many times as necessary to catch destination datafile copies up to source database)

    1. Create incremental backup on source system
    2. Transfer incremental backup to destination system
    3. Convert incremental backup to destination system endian format and apply the backup to the destination datafile copies
NOTE: In Version 3, if a datafile is added to the tablespace OR a new tablespace name is added to the xtt.properties file, a warning is raised and additional instructions must be followed.

3.  Transport phase (source data is READ ONLY)

    1. Make tablespaces in source database READ ONLY
    2. Repeat the Roll Forward phase one final time
      • This step makes the destination datafile copies consistent with the source database.
      • Time for this step is significantly shorter than with the traditional XTTS method when dealing with large data because the incremental backup size is smaller.
    3. Export metadata of objects in the tablespaces from source database using Data Pump
    4. Import metadata of objects in the tablespaces into destination database using Data Pump
    5. Make tablespaces in destination database READ WRITE

The purpose of this document is to provide an example of how to use this enhanced RMAN cross platform incremental backup capability to reduce downtime when transporting tablespaces across platforms.


SCOPE

Consider using the new release of this procedure, version 4. This version has drastically simplified the steps and procedure. Before proceeding, review:
V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup Note 2471245.1  

The source system may be any platform provided the prerequisites referenced and listed below for both platform and database are met. 
If you are migrating from a little endian platform to Oracle Linux, then the migration method that should receive first consideration is Data Guard.  See Note 413484.1 for details about heterogeneous platform support for Data Guard between your current little endian platform and Oracle Linux.


This method can also be used with 12c databases; however, for an alternative method for 12c, see:


Note 2005729.1 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

NOTE:  Neither method supports 12c multitenant databases.  Enhancement bug 22570430 addresses this limitation.

DETAILS

Consider using the new release of this procedure, version 4. This version has drastically simplified the steps and procedure. Before proceeding, review:
V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup Note 2471245.1  

Overview

This document provides a procedural example of transporting two tablespaces called TS1 and TS2 from an Oracle Solaris SPARC system to an Oracle Exadata Database Machine running Oracle Linux, incorporating Oracle's Cross Platform Incremental Backup capability to reduce downtime.
After performing the Initial Setup phase, moving the data is performed in the following three phases:


Prepare phase
During the Prepare phase, datafile copies of the tablespaces to be transported are transferred to the destination system and converted.  The application being migrated is fully accessible during the Prepare phase.  The Prepare phase can be performed using RMAN backups or dbms_file_transfer.  Refer to the Selecting the Prepare Phase Method section for details about choosing the Prepare phase method.

Roll Forward phase
During the Roll Forward phase, the datafile copies that were converted during the Prepare phase are rolled forward using incremental backups taken from the source database.  By performing this phase multiple times, each successive incremental backup becomes smaller and faster to apply, allowing the data at the destination system to be brought almost current with the source system.  The application being migrated is fully accessible during the Roll Forward phase.

Transport phase
During the Transport phase, the tablespaces being transported are put into READ ONLY mode, and a final incremental backup is taken from the source database and applied to the datafile copies on the destination system, making the destination datafile copies consistent with source database.  Once the datafiles are consistent, the tablespaces are TTS-exported from the source database and TTS-imported into the destination database.  Finally, the tablespaces are made READ WRITE for full access on the destination database. The application being migrated cannot receive any updates during the Transport phase.

Cross Platform Incremental Backup Supporting Scripts

The Cross Platform Incremental Backup core functionality is delivered in Oracle Database 11.2.0.4 and later.  See the Requirements and Recommendations section for details.  In addition, a set of supporting scripts, attached to this document as rman-xttconvert_2.0.zip, is used to manage the procedure required to perform XTTS with Cross Platform Incremental Backup.  The two primary supporting script files are the following:


  • Perl script xttdriver.pl - the script that is run to perform the main steps of the XTTS with Cross Platform Incremental Backup procedure.
  • Parameter file xtt.properties - the file that contains your site-specific configuration.

Requirements and Recommendations

This section contains the following subsections:

  • Prerequisites
  • Selecting the Prepare Phase Method
  • Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance

Prerequisites

The following prerequisites must be met before starting this procedure:

  • The limitations and considerations for transportable tablespaces must still be followed.  They are defined in the Oracle Database documentation for your release.
  • In addition to the limitations and considerations for transportable tablespaces, the following conditions must be met (a verification sketch for several of these conditions appears after this list):
    • The current version does NOT support Windows.
    • The source database must be running 10.2.0.3 or higher.
    • The source database must have its COMPATIBLE parameter set to 10.2.0 or higher.
    • The source database's COMPATIBLE parameter must not be greater than the destination database's COMPATIBLE parameter.
    • The source database must be in ARCHIVELOG mode.
    • The destination database must be running 11.2.0.4 or higher.
    • Although the preferred destination system is Linux (either 64-bit Oracle Linux or a certified version of RedHat Linux), this procedure can be used with other UNIX-based operating systems.
    • The Oracle version of the source must be lower than or equal to that of the destination, so this procedure can also be used as an upgrade method. Transportable tablespace restrictions still apply.
    • RMAN's default device type should be configured to DISK.
    • RMAN on the source system must not have DEVICE TYPE DISK configured with COMPRESSED.  If it is, the procedure may return: ORA-19994: cross-platform backup of compressed backups different endianess.
    • The set of tablespaces being moved must all be online, and contain no offline data files.  Tablespaces must be READ WRITE.  Tablespaces that are READ ONLY may be moved with the normal XTTS method.  There is no need to incorporate Cross Platform Incremental Backups to move tablespaces that are always READ ONLY.
  • All steps in this procedure are run as the oracle user that is a member of the OSDBA group. OS authentication is used to connect to both the source and destination databases.
  • If the Prepare Phase method selected is dbms_file_transfer, then the destination database must be 11.2.0.4.  See the Selecting the Prepare Phase Method section for details.
  • If the Prepare Phase method selected is RMAN backup, then staging areas are required on both the source and destination systems.  See the Selecting the Prepare Phase Method section for details.
  • It is not supported to execute this procedure against a standby or snapshot standby database.
  • If the destination database version is 11.2.0.3 or lower, then a separate 11.2.0.4 database home running an 11.2.0.4 instance is required on the destination system to perform the incremental backup conversion.  See the Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance section for details. If using ASM for the 11.2.0.4 convert home, then ASM needs to be on 11.2.0.4, else error ORA-15295 (e.g. ORA-15295: ASM instance software version 11.2.0.3.0 less than client version 11.2.0.4.0) is raised.
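
The following illustrative checks (not part of the supplied scripts) can help confirm several of these conditions on the source system; the final RMAN command resets the disk device type to an uncompressed backupset if COMPRESSED had previously been configured:

SQL@source> select banner from v$version;
SQL@source> show parameter compatible
SQL@source> select log_mode, platform_id, platform_name from v$database;

RMAN> show device type;
RMAN> configure device type disk backup type to backupset;
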
Whole Database Migration

If Cross Platform Incremental Backups will be used to reduce downtime for a whole database migration, then the steps in this document can be combined with the XTTS guidance provided in the MAA paper Platform Migration Using Transportable Tablespaces: Oracle Database 11g.
This method can also be used with 12c databases; however, for an alternative method for 12c, see:
Note 2005729.1 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

Selecting the Prepare Phase Method

During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the xttdriver.pl script.  There are two possible methods:

  1. Using dbms_file_transfer (DFT) transfer (using xttdriver.pl -S and -G options)
  2. Using Recovery Manager (RMAN) backup (using xttdriver.pl -p and -c options)

The dbms_file_transfer method uses the dbms_file_transfer.get_file() subprogram to transfer the datafiles from the source system to the target system over a database link.  The dbms_file_transfer method has the following advantages over the RMAN method: 1) it does not require staging area space on either the source or destination system; 2) datafile conversion occurs automatically during transfer - there is not a separate conversion step.  The dbms_file_transfer method requires the following:

  • A destination database running 11.2.0.4.  Note that an incremental convert home or instance does not participate in dbms_file_transfer file transfers.
  • A database directory object in the source database from where the datafiles are copied.
  • A database directory object in the destination database to where the datafiles are placed.
  • A database link in the destination database referencing the source database.

The RMAN backup method runs RMAN on the source system to create backups of the datafiles to be transported.  The backup files must then be manually transferred over the network to the destination system.  On the destination system the datafiles are converted by RMAN, if necessary.  The output of the RMAN conversion places the datafiles in their final location where they will be used by the destination database.  In the original version of xttdriver.pl, this was the only method supported.  The RMAN backup method requires the following:

  • Staging areas are required on both the source and destination systems for the datafile copies created by RMAN.  The staging areas are referenced in the xtt.properties file using the parameters dfcopydir and stageondest.  The final destination where converted datafiles are placed is referenced in the xtt.properties file using the parameter storageondest.  Refer to the Description of Parameters in Configuration File xtt.properties section for details and sizing guidelines.

Details of using each of these methods are provided in the instructions below.  The recommended method is the dbms_file_transfer method.

Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance

The Cross Platform Incremental Backup core functionality (i.e. incremental backup conversion) is delivered in Oracle Database 11.2.0.4 and later.  If the destination database version is 11.2.0.4 or later, then the destination database can perform this function.  However, if the destination database version is 11.2.0.3 or earlier, then, for the purposes of performing incremental backup conversion, a separate 11.2.0.4 software home, called the incremental convert home, must be installed, and an instance, called the incremental convert instance, must be started in NOMOUNT state using that home.  The incremental convert home and incremental convert instance are temporary and are used only during the migration.

Note that because the dbms_file_transfer Prepare Phase method requires destination database 11.2.0.4, which can itself perform the incremental backup conversion function (as stated above), an incremental convert home and incremental convert instance are usually only applicable when the Prepare Phase method is RMAN backup.

For details about setting up a temporary incremental convert instance, see instructions in Phase 1.

Troubleshooting

To enable debug mode, either run xttdriver.pl with the -d flag, or set environment variable XTTDEBUG=1 before running xttdriver.pl.  Debug mode enables additional screen output and causes all RMAN executions to be performed with the debug command line option.
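
For example, either of the following forms enables debug mode for the prepare step (-p is just one of the step options described in the Appendix):

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p -d

[oracle@source]$ export XTTDEBUG=1
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p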

Known Issues

  1. If the source database contains nested IOTs with key compression, then the fix for Bug 14835322 must be installed in the destination database home (where the tablespace plug operation occurs).
  2. If you wish to utilize block change tracking on the source database when incremental backups are created, then the fix for Bug 16850197 must be installed in the source database home.
  3. If using ASM in both source and destination, see XTTS Creates Alias on Destination when Source and Destination use ASM (Note 2351123.1).
  4. If the roll forward phase (xttdriver.pl -r) fails with the following errors, then verify RMAN DEVICE TYPE DISK is not configured COMPRESSED.
    Entering RollForward
    After applySetDataFile
    Done: applyDataFileTo
    Done: RestoreSetPiece
    DECLARE
    *
    ERROR at line 1:
    ORA-19624: operation failed, retry possible
    ORA-19870: error while restoring backup piece
    /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup
    ORA-19608: /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup is not a backup
    piece
    ORA-19837: invalid blocksize 0 in backup piece header
    ORA-06512: at "SYS.X$DBMS_BACKUP_RESTORE", line 2338
    ORA-06512: at line 40
    

     

  5. For known issues, Note 17866999.8 can also be consulted.  If the source contains cluster objects, then run "analyze cluster &cluster_name validate structure cascade" after XTTS has been completed in the target database.  If it reports an ORA-1499, open the trace file and review whether it has entries like:
    kdcchk: index points to block 0x01c034f2 slot 0x1 chain length is 256
    kdcchk: chain count wrong 0x01c034f2.1 chain is 1 index says 256
    last entry 0x01c034f2.1 blockcount = 1
    kdavls: kdcchk returns 3 when checking cluster dba 0x01c034a1 objn 90376

    Then to repair this inconsistency either:

    1. rebuild the cluster index.
    or
    2. Install the fix for bug 17866999 and run dbms_repair.repair_cluster_index_keycount.

    If, after repairing the inconsistency, "analyze cluster &cluster_name validate structure cascade" still reports issues, then recreate the affected cluster, which involves recreating its tables (an illustrative example of the check appears after this list).

    Note that the fix for bug 17866999 is a workaround to repair the index cluster; it will not prevent the problem. Oracle did not find a complete fix for this situation, so it can affect any RDBMS version.

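For example, for a hypothetical cluster owned by an application schema (the owner and cluster names below are placeholders), the check described in item 5 would be run in the destination database as:

SQL@dest> analyze cluster APPUSER.ORDERS_CLUSTER validate structure cascade;

If ORA-1499 is returned, review the trace file written to the destination database's trace directory for entries like those shown above.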

Transport Tablespaces with Reduced Downtime using Cross Platform Incremental Backup

The XTTS with Cross Platform Incremental Backups procedure is divided into the following four phases:

  • Phase 1 - Initial Setup phase
  • Phase 2 - Prepare phase
  • Phase 3 - Roll Forward phase
  • Phase 4 - Transport phase

Conventions Used in This Document

  • All command examples use bash shell syntax.
  • Commands prefaced by the shell prompt string [oracle@source]$ indicate commands run as the oracle user on the source system.
  • Commands prefaced by the shell prompt string [oracle@dest]$ indicate commands run as the oracle user on the destination system.

Phase 1 - Initial Setup

Perform the following steps to configure the environment to use Cross Platform Incremental Backups:

Step 1.1 - Install the Destination Database Software and Create the Destination Database

Install the desired Oracle Database software on the destination system that will run the destination database.  It is highly recommended to use Oracle Database 11.2.0.4 or later.  Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.

Identify (or create) a database on the destination system to transport the tablespace(s) into and create the schema users required for the tablespace transport.

Per generic TTS requirement, ensure that the schema users required for the tablespace transport exist in the destination database.
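
For example, if the objects being transported are owned by a schema named APP_OWNER (a hypothetical name), the user could be pre-created in the destination database as follows; grant whatever additional privileges the application actually requires:

SQL@dest> create user app_owner identified by <password>;
SQL@dest> grant create session to app_owner;
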
Step 1.2 - If necessary, Configure the Incremental Convert Home and Instance

See the Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance section for details.

Skip this step if the destination database software version is 11.2.0.4 or later.  Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.

If the destination database is 11.2.0.3 or earlier, then you must configure a separate incremental convert instance by performing the following steps:

    • Install a new 11.2.0.4 database home on the destination system.  This is the incremental convert home.
    • Using the incremental convert home, start up an instance in the NOMOUNT state.  This is the incremental convert instance.  A database does not need to be created for the incremental convert instance.  Only a running instance is required.

The following steps may be used to create an incremental convert instance named xtt running from the incremental convert home /u01/app/oracle/product/11.2.0.4/xtt_home:

[oracle@dest]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/xtt_home
[oracle@dest]$ export ORACLE_SID=xtt
[oracle@dest]$ cat << EOF > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
db_name=xtt
compatible=11.2.0.4.0
EOF
[oracle@dest]$ sqlplus / as sysdba
SQL> startup nomount
If ASM storage is used for the xtt.properties parameter backupondest (described below), then the COMPATIBLE initialization parameter setting for this instance must be equal to or higher than the rdbms.compatible setting for the ASM disk group used.
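
A query such as the following (illustrative, run on the destination) shows the disk group compatibility attributes; the DATABASE_COMPATIBILITY column corresponds to the disk group's rdbms compatibility setting:

SQL@dest> select name, compatibility, database_compatibility from v$asm_diskgroup;
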
Step 1.3 - Identify Tablespaces to be Transported

Identify the tablespace(s) in the source database that will be transported. Tablespaces TS1 and TS2 will be used in the examples in this document.  As indicated above, the limitations and considerations for transportable tablespaces must still be followed.
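
To confirm that the selected tablespaces are self-contained, the standard transportable tablespace check can be run in the source database, for example:

SQL@source> execute dbms_tts.transport_set_check('TS1,TS2', TRUE);
SQL@source> select * from transport_set_violations;

Any rows returned indicate violations that must be resolved before the tablespaces can be transported.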

Step 1.4 - If Using dbms_file_transfer Prepare Phase Method, then Configure Directory Objects and Database Links

Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.

If using dbms_file_transfer as the Prepare Phase method, then three database objects must be created:

    1. A database directory object in the source database from where the datafiles are copied
    2. A database directory object in the destination database to where the datafiles are placed
    3. A database link in the destination database referencing the source database

The source database directory object references the location where the datafiles in the source database currently reside.  For example, to create directory object sourcedir that references datafiles in ASM location +DATA/prod/datafile, connect to the source database and run the following SQL command:

SQL@source> create directory sourcedir as '+DATA/prod/datafile';

The destination database directory object references the location where the datafiles will be placed on the destination system.  This should be the final location where the datafiles will reside when in use by the destination database.  For example, to create directory object destdir that will place transferred datafiles in ASM location +DATA/prod/datafile, connect to the destination database and run the following SQL command:

SQL@dest> create directory destdir as '+DATA/prod/datafile';

The database link is created in the destination database, referencing the source database.  For example, to create a database link named ttslink, run the following SQL command:

SQL@dest> create public database link ttslink connect to system identified by <password> using '<tns_to_source>';

Verify the database link can properly access the source system:

SQL@dest> select * from dual@ttslink;
Step 1.5 - Create Staging Areas

Create the staging areas on the source and destination systems as defined by the following xtt.properties parameters: backupformat, backupondest.

Also, if using RMAN backups in the Prepare phase, create the staging areas on the source and destination systems as defined by the following xtt.properties parameters: dfcopydir, stageondest.
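
For example, using the staging locations shown in the xtt.properties examples in the Appendix (adjust the paths for your environment):

[oracle@source]$ mkdir -p /stage_source
[oracle@dest]$ mkdir -p /stage_dest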

Step 1.6 - Install xttconvert Scripts on the Source System

On the source system, as the oracle software owner, download and extract the supporting scripts attached as rman-xttconvert_2.0.zip to this document.

[oracle@source]$ pwd
/home/oracle/xtt

[oracle@source]$ unzip rman_xttconvert_v3.zip
Archive: rman_xttconvert_v3.zip
inflating: xtt.properties
inflating: xttcnvrtbkupdest.sql
inflating: xttdbopen.sql
inflating: xttdriver.pl
inflating: xttprep.tmpl
extracting: xttstartupnomount.sql
Step 1.7 - Configure xtt.properties on the Source System

Edit the xtt.properties file on the source system with your site-specific configuration.  For more information about the parameters in the xtt.properties file, refer to the Description of Parameters in Configuration File xtt.properties section in the Appendix below.
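
A minimal illustration for the dbms_file_transfer method, using the example values from the parameter descriptions in the Appendix (the backupondest value shown is only a placeholder; set every parameter for your own site):

tablespaces=TS1,TS2
platformid=2
srcdir=SOURCEDIR
dstdir=DESTDIR
srclink=TTSLINK
backupformat=/stage_source
stageondest=/stage_dest
backupondest=/stage_dest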

Step 1.8 - Copy xttconvert Scripts and xtt.properties to the Destination System

As the oracle software owner, copy all xttconvert scripts and the modified xtt.properties file to the destination system.

[oracle@source]$ scp -r /home/oracle/xtt dest:/home/oracle/xtt
Step 1.9 - Set TMPDIR

In the shell environment on both source and destination systems, set environment variable TMPDIR to the location where the supporting scripts exist.  Use this shell to run the Perl script xttdriver.pl as shown in the steps below.  If TMPDIR is not set, output files are created in and input files are expected to be in /tmp.

[oracle@source]$ export TMPDIR=/home/oracle/xtt
[oracle@dest]$ export TMPDIR=/home/oracle/xtt

Phase 2 - Prepare Phase

During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the xttdriver.pl script.  There are two possible methods:

  1. Phase 2A - dbms_file_transfer Method
  2. Phase 2B - RMAN Backup Method

Select and use one of these methods based upon the information provided in the Requirements and Recommendations section above.

NOTE:  For a large number of files, using dbms_file_transfer has been found to be the fastest method for transferring datafiles to the destination.

Phase 2A - Prepare Phase for dbms_file_transfer Method

Only use the steps in Phase 2A if the Prepare Phase method chosen is dbms_file_transfer and the setup instructions have been completed, particularly those in Step 1.4.

During this phase datafiles of the tablespaces to be transported are transferred directly from source system and placed on the destination system in their final location to be used by the destination database.  If conversion is required, it is performed automatically during transfer.  No separate conversion step is required.  The steps in this phase are run only once.  The data being transported is fully accessible in the source database during this phase.

Step 2A.1 - Run the Prepare Step on the Source System

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the prepare step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -S
The prepare step performs the following actions on the source system:
  • Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
  • Creates the following files used later in this procedure:
    • xttnewdatafiles.txt
    • getfile.sql
The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE.  The Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY.  If a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the traditional cross platform transportable tablespace process.  No incremental apply is needed for those files.

Step 2A.2 - Transfer the Datafiles to the Destination System
On the destination system, log in as the oracle user and set the environment (ORACLE_HOME and ORACLE_SID environment variables) to the destination database (it is invalid to attempt to use an incremental convert instance). Copy the xttnewdatafiles.txt and getfile.sql files created in step 2A.1 from the source system and run the -G get_file step as follows:
   NOTE: This step copies all datafiles being transported from the source system to the destination system.  The length of time for this step to complete is dependent on datafile size, and may be substantial.  Use the getfileparallel option for parallelism.

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttnewdatafiles.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/getfile.sql /home/oracle/xtt

# MUST set environment to destination database
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G

When this step is complete, the datafiles being transported will reside in the final location where they will be used by the destination database.  Note that endian conversion, if required, is performed automatically during this step.

Proceed to Phase 3 to create and apply incremental backups to the datafiles.

Phase 2B - Prepare Phase for RMAN Backup Method

Only use the steps in Phase 2B if the Prepare Phase method chosen is RMAN backup and the setup instructions have been completed, particularly those in Step 1.5.

During this phase datafile copies of the tablespaces to be transported are created on the source system, transferred to the destination system, converted, and placed in their final location to be used by the destination database.  The steps in this phase are run only once.  The data being transported is fully accessible in the source database during this phase.

Step 2B.1 - Run the Prepare Step on the Source System

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the prepare step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p
The prepare step performs the following actions on the source system:
  • Creates datafile copies of the tablespaces that will be transported in the location specified by the xtt.properties parameter dfcopydir.
  • Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
  • Creates the following files used later in this procedure:
    • xttplan.txt
    • rmanconvert.cmd

The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE.  The Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY.  If a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the traditional cross platform transportable tablespace process.  No incremental apply is needed for those files.

Step 2B.2 - Transfer Datafile Copies to the Destination System
On the destination system, logged in as the oracle user, transfer the datafile copies created in the previous step from the source system.  Datafile copies on the source system are created in the location defined in xtt.properties parameter dfcopydir.  The datafile copies must be placed in the location defined by xtt.properties parameter stageondest.

Any method of transferring the datafile copies from the source system to the destination system that results in a bit-for-bit copy is supported.

If the dfcopydir location on the source system and the stageondest location on the destination system refer to the same NFS storage location, then this step can be skipped since the datafile copies are already available in the expected location on the destination system.
In the example below, scp is used to transfer the copies created by the previous step from the source system to the destination system.

[oracle@dest]$ scp oracle@source:/stage_source/* /stage_dest

Note that due to current limitations with cross-endian support in DBMS_FILE_TRANSFER and ASMCMD, you must use OS-level commands, such as SCP or FTP, to transfer the copies from the source system to the destination system.

Step 2B.3 - Convert the Datafile Copies on the Destination System
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the source system and run the convert datafiles step as follows:

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/rmanconvert.cmd /home/oracle/xtt

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -c

 

The convert datafiles step converts the datafiles copies in the stageondest location to the endian format of the destination system.  The converted datafile copies are written in the location specified by the xtt.properties parameter storageondest.  This is the final location where datafiles will be accessed when they are used by the destination database.

When this step is complete, the datafile copies in stageondest location are no longer needed and may be removed.

Phase 3 - Roll Forward Phase

During this phase an incremental backup is created from the source database, transferred to the destination system, converted to the destination system endian format, then applied to the converted destination datafile copies to roll them forward.  This phase may be run multiple times. Each successive incremental backup should take less time than the prior incremental backup, and will bring the destination datafile copies more current with the source database.  The data being transported is fully accessible during this phase.

Step 3.1 - Create an Incremental Backup of the Tablespaces being Transported on the Source System

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i

The create incremental step executes RMAN commands to generate incremental backups for all tablespaces listed in xtt.properties.  It creates the following files used later in this procedure:

  • tsbkupmap.txt
  • incrbackups.txt

Step 3.2 - Transfer Incremental Backup to the Destination System

Transfer the incremental backup(s) created during the previous step to the stageondest location on the destination system.  The list of incremental backup files to copy is found in the incrbackups.txt file on the source system.

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/stage_dest
If the backupformat location on the source system and the stageondest location on the destination system refer to the same NFS storage location, then this step can be skipped since the incremental backups are already available in the expected location on the destination system.

Step 3.3 - Convert the Incremental Backup and Apply to the Datafile Copies on the Destination System

On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the xttplan.txt and tsbkupmap.txt files from the source system and run the rollforward datafiles step as follows:

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

The rollforward datafiles step connects to the incremental convert instance as SYS, converts the incremental backups, then connects to the destination database and applies the incremental backups for each tablespace being transported.


Note:
1.  You must copy the xttplan.txt and tsbkupmap.txt files each time this step is executed, because their content changes with each iteration.
2.  Do NOT modify or copy the xttplan.txt.new file generated by the script.
3.  The destination instance will be shut down and restarted by this process.

Step 3.4 - Determine the FROM_SCN for the Next Incremental Backup
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the determine new FROM_SCN step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -s

The determine new FROM_SCN step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1.

Step 3.5 - Repeat the Roll Forward Phase (Phase 3) or Move to the Transport Phase (Phase 4)

At this point there are two choices:

  1. If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll Forward phase, starting with step 3.1.
  2. If the files at the destination database are as close as desired to the source database, then proceed to the Transport phase.

NOTE: If a datafile has been added to a tablespace since the last incremental backup and/or a new tablespace name has been added to xtt.properties, the following will appear:

Error:
------
The incremental backup was not taken as a datafile has been added to the tablespace:

Please Do the following:
--------------------------
1. Copy fixnewdf.txt from source to destination temp dir

2. Copy backups:
<backup list>
from <source location> to the <stage_dest> in destination

3. On Destination, run $ORACLE_HOME/perl/bin/perl xttdriver.pl --fixnewdf

4. Re-execute the incremental backup in source:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpincr

NOTE: Before running incremental backup, delete FAILED in source temp dir or
run xttdriver.pl with -L option:

$ORACLE_HOME/perl/bin/perl xttdriver.pl -L --bkpincr

These instructions must be followed exactly as listed. The next incremental backup will include the new datafile.  

Phase 4 - Transport Phase

 

NOTE:  Be sure the destination database has the necessary objects to allow the import to succeed.  This includes pre-creating the owners of the tables in the tablespace being plugged in.  See information on Transportable Tablespace and the guidance provided in the MAA paper Platform Migration Using Transportable Tablespaces: Oracle Database 11g.

  

During this phase the source data is made READ ONLY and the destination datafiles are made consistent with the source database by creating and applying a final incremental backup. After the destination datafiles are made consistent, the normal transportable tablespace steps are performed to export object metadata from the source database and import it into the destination database.  The data being transported is accessible only in READ ONLY mode until the end of this phase.

Step 4.1 - Make Source Tablespaces READ ONLY in the Source Database
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY.

system@source/prod SQL> alter tablespace TS1 read only;

Tablespace altered.

system@source/prod SQL> alter tablespace TS2 read only;

Tablespace altered.

Step 4.2 - Create the Final Incremental Backup, Transfer, Convert, and Apply It to the Destination Datafiles
Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the destination datafiles.

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/stage_dest

[oracle@source]$ scp xttplan.txt oracle@dest:/home/oracle/xtt
[oracle@source]$ scp tsbkupmap.txt oracle@dest:/home/oracle/xtt

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

Step 4.3 - Import Object Metadata into Destination Database
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e
The generate Data Pump TTS command step creates a sample Data Pump network_link transportable import command in the file xttplugin.txt with the transportable tablespaces parameters TRANSPORT_TABLESPACES and TRANSPORT_DATAFILES correctly set.  Note that network_link mode initiates an import over a database link that refers to the source database.  A separate export or dump file is not required.  If you choose to perform the tablespace transport with this command, then you must edit the import command to replace import parameters DIRECTORY, LOGFILE, and NETWORK_LINK with site-specific values.

The following is an example network mode transportable import command:

[oracle@dest]$ impdp directory=DATA_PUMP_DIR logfile=tts_imp.log network_link=ttslink \
transport_full_check=no \
transport_tablespaces=TS1,TS2 \
transport_datafiles='+DATA/prod/datafile/ts1.285.771686721', \
'+DATA/prod/datafile/ts2.286.771686723', \
'+DATA/prod/datafile/ts2.287.771686743'

After the object metadata being transported has been extracted from the source database, the tablespaces in the source database may be made READ WRITE again, if desired.

Database users that own objects being transported must exist in the destination database before performing the transportable import.

If you do not use network_link import, then perform the tablespace transport by running transportable mode Data Pump Export on the source database to export the object metadata being transported into a dump file, then transfer the dump file to the destination system, then run transportable mode Data Pump Import to import the object metadata into the destination database.  Refer to the Oracle Data Pump documentation for details.
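
The following is an illustrative sketch of that dump-file approach, reusing the example tablespace names and datafile names from the network-mode command above; directory objects, file names, paths, and credentials are placeholders:

[oracle@source]$ expdp system/<password> directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
logfile=tts_exp.log transport_tablespaces=TS1,TS2 transport_full_check=no

[oracle@source]$ scp <path_of_DATA_PUMP_DIR>/tts_meta.dmp oracle@dest:<path_of_DATA_PUMP_DIR>/

[oracle@dest]$ impdp system/<password> directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
logfile=tts_imp.log \
transport_datafiles='+DATA/prod/datafile/ts1.285.771686721', \
'+DATA/prod/datafile/ts2.286.771686723', \
'+DATA/prod/datafile/ts2.287.771686743'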

Step 4.4 - Make the Tablespace(s) READ WRITE in the Destination Database

The final step is to make the destination tablespace(s) READ WRITE in the destination database.

system@dest/prod SQL> alter tablespace TS1 read write;

Tablespace altered.

system@dest/prod SQL> alter tablespace TS2 read write;

Tablespace altered.

Step 4.5 - Validate the Transported Data
At this point, perform application-specific validation to verify the transported data.

Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:

RMAN> validate tablespace TS1, TS2 check logical;

 

Phase 5 - Cleanup

If a separate incremental convert home and instance were created for the migration, then the instance may be shut down and the software removed.
Files created by this process are no longer required and may now be removed.  They include the following:
  • dfcopydir location on the source system
  • backupformat location on the source system
  • stageondest location on the destination system
  • backupondest location on the destination system
  • $TMPDIR location in both destination and source systems
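
For example, using the staging locations and the incremental convert instance from this document (paths are illustrative; verify that nothing is still needed before deleting):

[oracle@source]$ rm -rf /stage_source /home/oracle/xtt
[oracle@dest]$ rm -rf /stage_dest /home/oracle/xtt

# Shut down the temporary incremental convert instance, if one was created in Step 1.2
[oracle@dest]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/xtt_home
[oracle@dest]$ export ORACLE_SID=xtt
[oracle@dest]$ sqlplus / as sysdba
SQL> shutdown abort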

Appendix

 

Description of Perl Script xttdriver.pl Options

 The following table describes the options available for the main supporting script xttdriver.pl.

Option | Description
-S prepare source for transfer

-S option is used only when Prepare phase method is dbms_file_transfer.

Prepare step is run once on the source system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.  This step creates files xttnewdatafiles.txt and getfile.sql.

-G get datafiles from source

-G option is used only when Prepare phase method is dbms_file_transfer.

Get datafiles step is run once on the destination system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.  The -S option must be run beforehand and files xttnewdatafiles.txt and getfile.sql transferred to the destination system.

This option connects to the destination database and runs script getfile.sql.  getfile.sql invokes dbms_file_transfer.get_file() subprogram for each datafile to transfer it from the source database directory object (defined by parameter srcdir) to the destination database directory object (defined by parameter dstdir) over a database link (defined by parameter srclink).

-p prepare source for backup

-p option is used only when Prepare phase method is RMAN backup.

Prepare step is run once on the source system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.

This step connects to the source database and runs the xttpreparesrc.sql script once for each tablespace to be transported, as configured in xtt.properties.  xttpreparesrc.sql does the following:

  1. Verifies the tablespace is online, in READ WRITE mode, and contains no offline datafiles.
  2. Identifies the SCN that will be used for the first iteration of the incremental backup step and writes it into file $TMPDIR/xttplan.txt.
  3. Creates the initial datafile copies on the destination system in the location specified by the parameter dfcopydir set in xtt.properties.  These datafile copies must be transferred manually to the destination system.
  4. Creates RMAN script $TMPDIR/rmanconvert.cmd that will be used to convert the datafile copies to the required endian format on the destination system.
-c convert datafiles

-c option is used only when Prepare phase method is RMAN backup.

Convert datafiles step is run once on the destination system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step uses the rmanconvert.cmd file created in the Prepare step to convert the datafile copies to the proper endian format.  Converted datafile copies are written on the destination system to the location specified by the parameter storageondest set in xtt.properties.

-i create incremental

Create incremental step is run one or more times on the source system with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.

This step reads the SCNs listed in $TMPDIR/xttplan.txt and generates an incremental backup that will be used to roll forward the datafile copies on the destination system.
-r rollforward datafiles

Rollforward datafiles step is run once for every incremental backup created, with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step connects to the incremental convert instance using the parameters cnvinst_home and cnvinst_sid, converts the incremental backup pieces created by the Create Incremental step, then connects to the destination database and rolls forward the datafile copies by applying the incremental for each tablespace being transported.
-s determine new FROM_SCN

Determine new FROM_SCN step is run one or more times with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.
This step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1. It reports the mapping of the new FROM_SCN to wall clock time to indicate how far behind the changes in the next incremental backup will be.
-e generate Data Pump TTS command

Generate Data Pump TTS command step is run once on the destination system with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step creates the template of a Data Pump Import command that uses a network_link to import metadata of objects that are in the tablespaces being transported.
-d debug

-d option enables debug mode for xttdriver.pl and the RMAN commands it executes.  Debug mode can also be enabled by setting environment variable XTTDEBUG=1.

 

Description of Parameters in Configuration File xtt.properties

The following table describes the parameters defined in the xtt.properties file that is used by xttdriver.pl.

Parameter | Description | Example Setting

tablespaces

Comma-separated list of tablespaces to transport from the source database to the destination database. Must be a single line; any subsequent lines will not be read.

tablespaces=TS1,TS2

platformid

Source database platform ID, obtained from V$DATABASE.PLATFORM_ID.

platformid=2

srcdir

Directory object in the source database that defines where the source datafiles currently reside. Multiple locations can be specified, separated by commas. The srcdir to dstdir mapping can be either N:1 or N:N; that is, files from multiple source directories can be written to a single destination directory, or files from a particular source directory can be written to a particular destination directory.

This parameter is used only when Prepare phase method is dbms_file_transfer.

srcdir=SOURCEDIR

srcdir=SRC1,SRC2

dstdir

Directory object in the destination database that defines where the destination datafiles will be created.  If multiple source directories are used (srcdir), then multiple destinations can be defined so a particular source directory is written to a particular destination directory.  

This parameter is used only when Prepare phase method is dbms_file_transfer.

dstdir=DESTDIR

dstdir=DST1,DST2

srclink

Database link in the destination database that refers to the source database.  Datafiles will be transferred over this database link using dbms_file_transfer.

This parameter is used only when Prepare phase method is dbms_file_transfer.

srclink=TTSLINK
dfcopydir

Location on the source system where datafile copies are created during the "-p prepare" step.

This location must have sufficient free space to hold copies of all datafiles being transported.

This location may be an NFS-mounted filesystem that is shared with the destination system, in which case it should reference the same NFS location as the stageondest parameter for the destination system.  See Note 359515.1 for mount option guidelines.

This parameter is used only when Prepare phase method is RMAN backup.

dfcopydir=/stage_source

backupformat

Location on the source system where incremental backups are created.

This location must have sufficient free space to hold the incremental backups created for one iteration through the process documented above.

This location may be an NFS-mounted filesystem that is shared with the destination system, in which case it should reference the same NFS location as the stageondest parameter for the destination system.
backupformat=/stage_source

stageondest

Location on the destination system where datafile copies are placed by the user when they are transferred manually from the source system.

This location must have sufficient free space to hold copies of all datafiles being transported.

This is also the location from which datafile copies and incremental backups are read when they are converted in the "-c conversion of datafiles" and "-r roll forward datafiles" steps.

This location may be a DBFS-mounted filesystem.

This location may be an NFS-mounted filesystem that is shared with the source system, in which case it should reference the same NFS location as the dfcopydir and backupformat parameters for the source system.  See Note 359515.1 for mount option guidelines.
stageondest=/stage_dest
storageondest

Location on the destination system where the converted datafile copies will be written during the "-c conversion of datafiles" step.

This location must have sufficient free space to permanently hold the datafiles that are transported.

This is the final location of the datafiles where they will be used by the destination database.

This parameter is used only when Prepare phase method is RMAN backup.

storageondest=+DATA
- or -
storageondest=/oradata/prod/%U

backupondest

Location on the destination system where converted incremental backups will be written during the "-r roll forward datafiles" step.

This location must have sufficient free space to hold the incremental backups created for one iteration through the process documented above.

NOTE: If this is set to an ASM location then define properties asm_home and asm_sid below. If this is set to a file system location, then comment out asm_home and asm_sid parameters below.
backupondest=+RECO
cnvinst_home

Only set this parameter if a separate incremental convert home is in use.

ORACLE_HOME of the incremental convert instance that runs on the destination system.

cnvinst_home=/u01/app/oracle/product/11.2.0.4/xtt_home
cnvinst_sid

Only set this parameter if a separate incremental convert home is in use.

ORACLE_SID of the incremental convert instance that runs on the destination system.

cnvinst_sid=xtt

asm_home

ORACLE_HOME for the ASM instance that runs on the destination system.

NOTE: If backupondest is set to a file system location, then comment out both asm_home and asm_sid.
asm_home=/u01/app/11.2.0.4/grid

asm_sid

ORACLE_SID for the ASM instance that runs on the destination system.

asm_sid=+ASM1

parallel

Defines the degree of parallelism set in the RMAN CONVERT command file rmanconvert.cmd. This file is created during the prepare step and used by RMAN in the convert datafiles step to convert the datafile copies on the destination system.  If this parameter is unset, xttdriver.pl uses parallel=8.

NOTE: RMAN parallelism used for the datafile copies created in the RMAN Backup prepare phase and the incremental backup created in the rollforward phase is controlled by the RMAN configuration on the source system. It is not controlled by this parameter.

parallel=3
rollparallel

Defines the level of parallelism for the -r roll forward operation.

rollparallel=2
getfileparallel

Defines the level of parallelism for the -G operation.

Default value is 1. Maximum supported value is 8.

getfileparallel=4
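
Pulling the example settings above together, an xtt.properties for the RMAN backup prepare method might look like the following sketch. All values are the illustrative examples from the table, not recommendations; srcdir, dstdir, and srclink are omitted because they apply only to the dbms_file_transfer method:

    # xtt.properties - sketch built from the example values in the table above
    tablespaces=TS1,TS2
    platformid=2
    dfcopydir=/stage_source
    backupformat=/stage_source
    stageondest=/stage_dest
    storageondest=+DATA
    backupondest=+RECO
    asm_home=/u01/app/11.2.0.4/grid
    asm_sid=+ASM1
    parallel=3
    rollparallel=2
    getfileparallel=4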

 

Known Issues

Known Issues for Cross Platform Transportable Tablespaces XTTS Document 2311677.1

Change History

 

 

Change | Date

rman_xttconvert_v3.zip released - adds support for added datafiles | 2017-Jun-06

rman-xttconvert_2.0.zip released - adds support for multiple source and destination directories | 2015-Apr-20

rman-xttconvert_1.4.2.zip released - adds parallelism support for the -G get file from source operation | 2014-Nov-14

rman-xttconvert_1.4.zip released - removes the staging area requirement, adds parallel rollforward, eliminates the conversion instance requirement when using 11.2.0.4 | 2014-Feb-21

rman-xttconvert_1.3.zip released - improves handling of large databases with a large number of datafiles | 2013-Apr-10
