
To Use Oracle Dump Files More Easily

System Requirements: OS and Resources for RP Dump Browser for Oracle

Supported OS

* Operation has not been confirmed in environments other than those listed above. Even if the software happens to run in such environments, they are not eligible for product support.

Resource usage information

This software uses temporary files during operation. These files are normally deleted automatically when the application closes; if the application is terminated forcibly, some may remain, but they are deleted automatically the next time the application starts, so few files are left behind. Sufficient free disk space is required during operation.

The first file records the position of each row and uses at most 8 bytes per row. The required disk space depends on the performance setting used when table data is referenced (table data reference is optional).
The table below shows the required disk space by number of rows.

The number of rows is the total across all tables. At the "Fastest" setting, the disk space used is between 10% and 40% of the size of the dump file.
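The worst-case size of the row-position file follows directly from the stated 8 bytes per row. A minimal sketch of that arithmetic, where the function name and the example row count are illustrative rather than part of the product:

```python
# Worst-case estimate of the row-position temporary file, assuming the
# documented maximum of 8 bytes per row. Illustrative only.
BYTES_PER_ROW = 8

def row_position_file_bytes(total_rows: int) -> int:
    """Worst-case file size for `total_rows` rows summed over all tables."""
    return total_rows * BYTES_PER_ROW

# e.g. 100 million rows across all tables -> 800 MB worst case
print(row_position_file_bytes(100_000_000))  # 800000000
```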

The second file stores a cache when a compressed dump file is opened directly. This file is not created unless a compressed file is opened. The required disk space depends on the size of the expanded file.

At present the software expands compressed data inefficiently, so a large amount of disk space may be consumed. This consumption may improve as performance is tuned in a future release.

Log File / Core File
This software writes log files during operation, and core files are written if it terminates abnormally. Log files and core files three or more days old are deleted automatically when the application starts. The exact log size is difficult to state, as it varies with the contents of the dump file and with how the software is used; as a rough guide, reading a 2 GB dump file from a full-database export produces about 40 MB of logs. If the number of objects is small, the log may be no more than a few KB. A core file is typically between 5 MB and 10 MB.
Sufficient free disk space is required for log files during operation.
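The startup cleanup described above (removing log and core files three or more days old) can be sketched as follows. The directory layout, file-name suffixes, and function name are assumptions for illustration, not the product's actual implementation:

```python
import os
import time

MAX_AGE_SECONDS = 3 * 24 * 60 * 60  # three days

def clean_old_files(directory, suffixes=(".log", ".core"), now=None):
    """Delete files in `directory` whose modification time is three or
    more days in the past, returning the names of deleted files."""
    now = time.time() if now is None else now
    removed = []
    for entry in os.scandir(directory):
        if entry.is_file() and entry.name.endswith(suffixes):
            if now - entry.stat().st_mtime >= MAX_AGE_SECONDS:
                os.remove(entry.path)
                removed.append(entry.name)
    return removed
```

Running such a cleanup at startup rather than at shutdown matches the behavior described: even files left behind by an abnormal exit are removed on the next launch.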


The amount of memory the software uses during operation depends on the number of objects. As long as the number of objects does not change, memory usage rarely changes even if the number of table records or the file size changes. Exact memory usage per object is difficult to state, as it varies between about 40 bytes and 16 KB; as a rough guide, memory usage reaches 100 MB to 300 MB when a dump of a database containing around 200,000 objects is read. Because Data Pump dump files are currently analyzed only for tables, memory usage tends to be small when a Data Pump dump file is read.
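As a sanity check on the figures above: 200,000 objects at the stated 40-byte to 16 KB per-object range spans from a few MB up to several GB, and the quoted 100 MB to 300 MB corresponds to an average of roughly 0.5 to 1.5 KB per object. A minimal sketch of that arithmetic (the function and numbers are illustrative):

```python
# Per-object memory bounds from the documented 40 B to 16 KB range.
def memory_range_bytes(num_objects, low=40, high=16 * 1024):
    """Return (minimum, maximum) total memory in bytes for num_objects."""
    return num_objects * low, num_objects * high

low, high = memory_range_bytes(200_000)
print(low, high)   # 8000000 3276800000

# Implied average per object at the quoted 100-300 MB usage:
print(100_000_000 // 200_000, 300_000_000 // 200_000)  # 500 1500
```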