Oracle Export and Import utilities tips
Although Oracle recommends using the newer Data Pump tools (expdp/impdp), available as of Oracle Database 10g, many developers and DBAs still use the original Oracle Export and Import (exp/imp) utilities. Below I quickly summarize some useful Export and Import tips.
Advantages of Export and Import utilities compared to Data Pump
– No environment preparation is needed (no directory objects to create, no server file-system access, etc.)
– Simple way to dump database objects to a remote location
– An Export dump can be compressed on the fly through a UNIX pipe, or even piped directly into an Import session
– Consumes less rollback/undo space than Data Pump
– Can merge table extents with the COMPRESS=Y option, which is not available in Data Pump
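The on-the-fly compression mentioned above is typically done through a named pipe. A minimal sketch, assuming a scott/tiger account and placeholder file names:

```shell
# Create a named pipe and let gzip read from it in the background;
# exp then writes its dump into the pipe instead of a regular file.
# scott/tiger and all paths are placeholders.
mkfifo /tmp/exp_pipe
gzip -c < /tmp/exp_pipe > /backup/scott.dmp.gz &
exp scott/tiger file=/tmp/exp_pipe owner=scott log=scott_exp.log
rm -f /tmp/exp_pipe
```

To read the dump back, reverse the direction: run gunzip into the pipe in the background and point imp at the pipe.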
Export and Import general usage tips
Keep dump and log files on the same machine where you invoke the exp/imp utilities. This can speed up the process and spares you complex network issue analysis; NFS in particular can slow things down considerably.
If you run multiple export/import sessions, ensure they write to different physical disks.
Whenever possible, ensure the NLS_LANG variable of your source and destination clients matches the database character set, so you can avoid character-set conversion entirely.
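A minimal sketch of setting the client character set, assuming the database character set turns out to be AL32UTF8 (the value here is only an example):

```shell
# Make the exp/imp client character set match the database character set.
# AL32UTF8 is only an example value - query the actual one with:
#   SELECT value FROM nls_database_parameters
#    WHERE parameter = 'NLS_CHARACTERSET';
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
echo "$NLS_LANG"
```

Set this in the shell that invokes exp on the source side and imp on the destination side.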
You may need to patch your Oracle client (where you run exp/imp) to the same patch level as the Oracle server to prevent errors.
Ensure you have minimal contention and enough space in the rollback/undo, temporary, and import destination tablespaces, as well as in the archive log destination.
Export performance tips
Export the optimizer statistics to a separate statistics table prior to taking an export of a schema or database.
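One way to stage the statistics is the DBMS_STATS package; a sketch assuming a SCOTT schema and a hypothetical staging table name MY_STATS:

```shell
# Copy the schema's optimizer statistics into an ordinary table so
# that exp picks them up along with the rest of the schema.
# scott/tiger, SCOTT and MY_STATS are placeholders.
sqlplus scott/tiger <<'EOF'
EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MY_STATS');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MY_STATS');
EOF
```

After the import, DBMS_STATS.IMPORT_SCHEMA_STATS restores the statistics from the staged table.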
Reserve double the amount of free disk space for the export file compared to the used size of the exported objects; there are more accurate techniques for estimating dump file size, though.
It is good practice to check the validity of your dump file right after the export by running an import with the parameter SHOW=Y.
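Such a validation run might look like this (credentials and file names are placeholders):

```shell
# SHOW=Y makes imp read through the entire dump and list the DDL it
# contains in the log, without creating or inserting anything.
imp system/manager file=full.dmp full=y show=y log=dump_check.log
```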
Most Export issues and bugs are related to DIRECT mode. If a suspicious issue occurs, retry with a conventional-path export.
Export parameters to consider:
direct=y – direct path export can be much faster, but make sure no updates are performed on the exported tables and that the rollback/undo segments are sufficiently sized; the character set of the export session should match the database character set
recordlength=65535 – may improve performance for direct path export
statistics=none – statistics handling by Export is tricky; it is better to transfer them separately or gather them fresh after the import
consistent=n – the default behavior; faster than consistent=y, but requires that the data does not change during the export
direct=n buffer=3000000 – use when direct mode is not possible, or just to be safe
QUERY and BUFFER options apply ONLY to conventional path export.
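Putting the parameters above together, hypothetical invocations could look like this (credentials, schema and file names are placeholders):

```shell
# Direct path export: faster, but QUERY/BUFFER are ignored and the
# exported tables must not receive concurrent updates.
exp system/manager owner=scott direct=y recordlength=65535 \
    statistics=none consistent=n file=scott.dmp log=scott_exp.log

# Conventional path fallback with a large fetch buffer.
exp system/manager owner=scott direct=n buffer=3000000 \
    statistics=none file=scott.dmp log=scott_exp.log
```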
Import performance tips
The way you performed the export has no influence on how the data is imported; neither the export mode nor the time it took affects import time. You therefore always have to tune the Import parameters to achieve good performance.
When importing large amounts of data, consider dropping existing indexes prior to the import. Creating indexes manually with the INDEXFILE option is not required, as recent versions of the imp utility can properly create all indexes after the data is inserted.
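After dropping the indexes, one common pattern is a two-pass import: load the rows first, then let imp build the indexes in a second run. A sketch with placeholder names:

```shell
# Pass 1: bring in the data only - index maintenance is skipped.
imp system/manager file=big.dmp fromuser=scott touser=scott \
    indexes=n ignore=y log=imp_data.log
# Pass 2: create only the indexes; rows=n skips the already-loaded data.
imp system/manager file=big.dmp fromuser=scott touser=scott \
    rows=n indexes=y ignore=y log=imp_indexes.log
```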
If possible, switch the database to NOARCHIVELOG mode to reduce I/O contention on the redo log destinations – both online and archived.
Import parameters to consider:
commit=n – best for performance but only if your rollback/undo segments can afford it
commit=y buffer=3000000 – to be safe on large objects, or in case commit=n does not work
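For example, a large import committing only at the end of each table might be invoked like this (all names are placeholders):

```shell
# commit=n: a single commit per table - fastest, but the undo/rollback
# segments must be able to hold the whole load.
imp system/manager file=scott.dmp fromuser=scott touser=scott \
    commit=n buffer=3000000 log=imp_scott.log
```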
Export issues and solutions
EXP-00008: ORACLE error 1555 encountered
ORA-01555: snapshot too old: rollback segment number 99 with name "_SYSSMU99$" too small
EXP-00000: Export terminated unsuccessfully
Solution: before extending your undo/rollback segments, try exporting with consistent=n.