A Foreigner's EXP & IMP Summary
Well written; it covers every aspect carefully. I wish I were this thorough.

Most DBAs know this old trick of using a named pipe in import or export on UNIX or Linux (not for
Windows, although it has the concept of named pipes). Most of the time, you do this when you need
to export to or import from a compressed file.
mknod tmppipe p #Create a named pipe, can also be: mkfifo tmppipe
#export
exp username/password file=tmppipe &
cat tmppipe | gzip > expdat.dmp.gz
#import
cat expdat.dmp.gz | gunzip > tmppipe &
#Or: gunzip -c expdat.dmp.gz > tmppipe &
#Or: gunzip < expdat.dmp.gz > tmppipe &
imp username/password file=tmppipe
You can also use compress/uncompress instead of gzip/gunzip (compress/uncompress may not be
installed by default on Linux; if not, run `up2date ncompress' as root to install it).
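With compress, only the cat lines change; a sketch of the two steps:
#export
cat tmppipe | compress > expdat.dmp.Z
#import
uncompress -c expdat.dmp.Z > tmppipe &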
This note talks about some less known details and common mistakes.
--------------------------------------------------------------------------------
* Drain the pipe before using it again
Every time you run the command that writes to the pipe, you need to make sure the next command
reading from it works correctly, or you may need to `cat' the pipe, kill the writer process and
start over. Here's why. A pipe is an object that one process writes data to and another reads data
from. In order to separate export and import into two steps, a named pipe on the file system has to
be created (so it persists between the two processes run at different times). Because of this
separation, you can't tell whether an existing pipe has data in it unless you "examine" it by
reading it. But once you read it, you drain its content. What this means in practice is that, if
you experiment with your whole import/export procedure multiple times, you should cat your pipe
(`cat tmppipe' here) before you repeat your test; otherwise you will leave leftover data in the
pipe that may cause your current import to throw errors that are hard to explain and not repeatable.
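A minimal sketch of the drain step, assuming the pipe is named tmppipe as above:
#drain any leftover data before the next test; press ^C if cat hangs waiting for a writer
cat tmppipe > /dev/null
#and check that no stale exp process is still attached to the pipe
ps -ef | grep '[e]xp' | grep tmppipe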
********************************************************************************
"idempotent" means you can do something multiple times and you get the same
result. Now you heard one case where reading (`cat') a file is no longer
idempotent. If you're asked querying what table or view in Oracle one time
removes its content, i.e., it's not idempotent, x$ksmlru is the answer.
I wonder if that mysterious view is implemented as a pipe.
********************************************************************************
--------------------------------------------------------------------------------
* Pipes, compression, and multiple dump files
It *is* doable to export to multiple pipes, each feeding into compress or gzip. You need to prepare
enough pipes beforehand and pass them to exp; being prompted for more filenames is not pretty in
this case.
mkfifo p1; mkfifo p2; ...
exp username/password "file=(p1,p2,...)" filesize=100000000 &   #quote the parentheses so the shell ignores them
cat p1 | gzip > p1.dmp.gz
cat p2 | gzip > p2.dmp.gz
...
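The import side is symmetric. A sketch for a hypothetical two-file dump (filesize must match the
value used at export):
mkfifo p1 p2
gunzip -c p1.dmp.gz > p1 &
gunzip -c p2.dmp.gz > p2 &
imp username/password "file=(p1,p2)" filesize=100000000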
When I used compress/uncompress to export/import a 400GB schema with many files, I got "read error
onstdin: Input/output error" ("onstdin" has no space in it), or "Broken pipe" multiple times, or
IMP-00048: mismatched file header
IMP-00027: failed to rollback partial import of previous table
IMP-00003: ORACLE error 1013 encountered
ORA-01013: user requested cancel of current operation
IMP-00000: Import terminated unsuccessfully
even though I did NOT manually cancel anything, or
IMP-00003: ORACLE error 3120 encountered
ORA-03120: two-task conversion routine: integer overflow
for no apparent reason. One time, compress threw this rare error, probably because the file was too
large:
$ compress -c tmppipe > bigdump.dmp.Z
internal error: hashtable exceeded - hsize = 5003
hshift = 4, 15
maxbits = 16
n_bits = 13
maxcode = 8191
Generally, gzip is much more stable than compress. But in the end, the successful export/import was
done with no compression, using the method outlined below. The 400GB took about 24 hours.
--------------------------------------------------------------------------------
* Export/import across hosts
If possible, take the simplest approach. So if you need to export data from DB1 on hostA and import
it into DB2 on hostB, first consider doing it all on one host, using SQL*Net (Oracle Net) for the
import (or export). For instance, on hostA:
exp username/password file=tmppipe &
imp username/password@DB2 file=tmppipe
But due to version compatibility, it's possible that either the source or the target database
doesn't like the other party's import or export utility. There are three solutions. One is to
temporarily change the export-related views in the SYS schema on one side to match those on the
other side. The second is to use NFS (make sure you have a big rsize and wsize; check with
`nfsstat -m'). The third, piping the export/import across hosts, is what we discuss here.
(Ref: Joel Patterson's message at )
On hostA (assume tmppipe is already created):
#cat the pipe and ^C. Must do this from the second run on to ensure a clean pipe
cat tmppipe
#Launch exp and let it block. Supply other params as needed.
exp username/password file=tmppipe &
#Assuming tmppipe on hostB already exists; specify path on hostB if not at user's $HOME; can choose
#any bs for dd but 1M is pretty good; rsh is preferred over ssh
cat tmppipe | rsh hostB dd bs=1024k ">>" tmppipe
On hostB (tmppipe already exists on hostB):
#Supply other params as needed. Can put it in background (append &).
imp username/password file=tmppipe
The most important part is the `cat ... | rsh ...' command on hostA. Make sure host equivalence
without a password prompt is already set up (~/.rhosts on hostB for rsh, or ~/.ssh/authorized_keys
etc. for ssh if your company disallows rsh). The dd command run on hostB takes its input from
stdin, i.e. from the pipeline (the vertical bar), not from the named pipe tmppipe, and appends it
to tmppipe on hostB. (If Oracle's imp utility could accept input from stdin instead of as an
argument following file=, we could avoid the extra dd command.) dd is one of many UNIX commands
that can take input from stdin. cpio can too, as well as gzip and compress, if compression is
desired. In the compression case, `cat' on hostA should be followed by compression, and dd should
be replaced by decompression, e.g., cat tmppipe | gzip | rsh hostB gunzip ">>" tmppipe. (The Linux
version of gzip or compress can't directly take a pipe as an argument, hence cat tmppipe | gzip.)
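Putting the compressed cross-host variant together, here is a sketch using ssh (key-based login
assumed; the single quotes make the remote shell perform the decompression and redirection on hostB):
#on hostB beforehand: mkfifo tmppipe; imp username/password file=tmppipe &
#on hostA:
mkfifo tmppipe
exp username/password file=tmppipe &
cat tmppipe | gzip | ssh hostB 'gunzip >> tmppipe'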
--------------------------------------------------------------------------------
* Tips about large export/import
Use screen or nohup. Note that nohup doesn't work across pipes by default. So try
nohup cat ... | rsh hostB nohup dd ...
or read
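Another workaround, sketched here, is to wrap the whole pipeline in a single shell so one nohup
covers every stage (transfer.log is just an example log name):
nohup sh -c 'cat tmppipe | rsh hostB dd bs=1024k ">>" tmppipe' > transfer.log 2>&1 &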
But neither approach is better than finding a Windows box that has good uptime (e.g. a production
server) and running putty on it to connect to your UNIX/Linux box. If your Windows terminal session
crashes, the putty process on the Windows box still runs.
Specify "resumable=yes" for exp and imp to avoid space related errors and possibly
resumable_timeout if you can't make it in 2 hours. It's not a cure for ORA-1555 so make other
arrangements as needed: reduce DML, enlarge undo space, stop non-essential jobs[note],... and
possibly change the tablespace to read only.
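For example (credentials are placeholders, and 14400 seconds is an arbitrary 4-hour choice):
imp username/password file=tmppipe resumable=yes resumable_timeout=14400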
Alter big indexes to parallel before the export, so the import builds them in parallel. Remember to
alter them back to noparallel on both source and target. Create a logon trigger for the importing
user on the target to include the following:
grant alter session to impuser;  --if not already granted
create trigger impuser.tmptrig
after logon on impuser.schema
begin
  execute immediate 'alter session set workarea_size_policy = manual';
  execute immediate 'alter session set sort_area_size = 100000000';
  execute immediate 'alter session set db_file_multiblock_read_count = 128';
end;
/
Drop the trigger immediately after the import starts.
If you didn't do that and the import is already running, you can still log on with sqlplus and
create the indexes yourself (with the above favorable settings). Once the import comes along to
create an index, it fails with IMP-15 (following statement failed because the object already
exists), pointing to the CREATE INDEX. Instead of aborting, the imp process continues to the next
index. This works even without ignore=y.
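A sqlplus sketch of building one index by hand this way (the index, table and column names and the
parallel degree are placeholders):
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 100000000;
alter session set db_file_multiblock_read_count = 128;
create index impuser.big_tab_ix on impuser.big_tab (id) parallel 8;
alter index impuser.big_tab_ix noparallel;  --remember to reset the degree afterwards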
_______________
[note] Either use this trick to set another user's dbms_job to broken (dbms_scheduler jobs don't
need this trick):
select * from dba_jobs; -- get job number
-- create the procedure in the job owner's schema so dbms_job.broken acts on that user's job
create procedure theuser.tmp as begin dbms_job.broken(&job#,true); end;
/
exec theuser.tmp
commit;
select * from dba_jobs; -- verify in case the broken procedure was run when the job was running
drop procedure theuser.tmp;
Or as sanjeevorcleATgmailDOTcom recommends, use the undocumented dbms_ijob.broken procedure.