Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) in 8i, 9i , 10g and 11g (Doc ID 260192.1)
Applies to:
Oracle Database - Enterprise Edition - Version 8.0.3.0 to 11.2.0.4 [Release 8.0.3 to 11.2]
Oracle Database - Standard Edition - Version 8.0.3.0 to 11.2.0.4 [Release 8.0.3 to 11.2]
Information in this document applies to any platform.
Purpose
The majority of SRs logged about "csalter not working" or "characterset change does not work" are simply due to the steps in this note NOT being followed.
From Oracle 12c onwards the DMU will be the only tool available to migrate to Unicode; see Note 1272374.1 The Database Migration Assistant for Unicode (DMU) Tool.
Note that in order to use a Unicode (AL32UTF8 / UTF8) database you need to make sure your application supports using a Unicode database. This is not something Oracle database support can "check" or "confirm".
Please consult the application documentation or ask the application vendor / support team if your application is certified to work with AL32UTF8 or UTF8 as NLS_CHARACTERSET. For further implications on client and application level when going to AL32UTF8, please see Note 788156.1 AL32UTF8 / UTF8 (Unicode) Database Character Set Implications; it is strongly recommended to read that note before starting.
The note can be used to go from any NLS_CHARACTERSET to AL32UTF8 / UTF8 (which also means it can be used to go from UTF8 to AL32UTF8, or the inverse).
The note is written using AL32UTF8; to use this note to go to another characterset (for example UTF8), simply replace "AL32UTF8" with "UTF8" in the CSSCAN TOCHAR and, for 9i and lower, in the alter database character set command. This "flow" can also be used to go from any single byte characterset (like US7ASCII, WE8DEC) to any other multibyte characterset (ZHS16GBK, ZHT16MSWIN950, ZHT16HKSCS, ZHT16HKSCS31, KO16MSWIN949, JA16SJIS ...); simply substitute AL32UTF8 with the xx16xxxx target characterset. But in that case going to AL32UTF8 would simply be a far better idea, see Note 333489.1 Choosing a database character set means choosing Unicode.
Scope
Any DBA changing the current NLS_CHARACTERSET to AL32UTF8 / UTF8 or another multibyte characterset.
Sqlplus / as sysdba
SELECT value FROM NLS_DATABASE_PARAMETERS WHERE parameter='NLS_CHARACTERSET'
/
The NLS_CHARACTERSET defines the characterset of the CHAR, VARCHAR2, LONG and CLOB datatypes.
Details
1) General remarks on going to AL32UTF8
This note documents the steps to safely change to Unicode by the use of Csscan and Csalter (or Alter Database Character Set in 8i/9i).
For migration to AL32UTF8 (and the deprecated UTF8), there is a new tool available called the Database Migration Assistant for Unicode (DMU). The DMU is a unique next-generation migration tool providing an end-to-end solution for migrating your databases from legacy encodings to Unicode. DMU's intuitive user-interface greatly simplifies the migration process and lessens the need for character set migration expertise by guiding the DBA through the entire migration process as well as automating many of the migration tasks.
The DMU tool is supported against Oracle 11.2.0.3 and higher and selected older version and platform combinations. For more information please see Note 1272374.1 The Database Migration Assistant for Unicode (DMU) Tool and the DMU pages on OTN. From Oracle 12c onwards, DMU will be the only tool available to migrate to Unicode. Note that the DMU tool and csscan/csalter are NOT related; they are different tools to reach the same end result (= change the NLS_CHARACTERSET to UTF8 or AL32UTF8).
1.a) Prerequisites:
In this note the Csscan tool is used. Please install this first; see Note 745809.1 for 10g/11g or Note 458122.1 for 8i/9i.
1.b) When changing an Oracle Applications Database or PeopleSoft database:
The steps in this note are NOT supported for an Oracle applications or PeopleSoft database.
For an Oracle PeopleSoft database please see Note 703689.1 Converting PeopleSoft Systems to Unicode Databases.
1.c) When to use (full) export / import and when to use Alter Database Character Set / Csalter?
With "Full exp/imp" is meant to take an export and import the complete dataset into a NEW AL32UTF8 database (this may be an other Oracle version or OS platform).
The end result of both ways of doing the change is however the same. Oracle support cannot say which method is the better option for you; this depends on many factors.
1.d) When using Expdp/Impdp (DataPump)
Do NOT use Expdp/Impdp when going to (AL32)UTF8 or another multibyte characterset on ALL 10g versions lower than 10.2.0.4 (including 10.1.0.5). 11.1.0.6 is also affected.
1.e) Using Alter Database Character Set on 9i
For 9i systems please make sure you are at least on Patchset 9.2.0.4, see Note 250802.1 Changing character set takes a very long time and uses lots of rollback space.
1.f) What about Physical / Logical Standby databases?
If your system has a Logical Standby database, this Logical Standby needs to be re-created from the "main" database after the characterset conversion; there is no supported way to alter the standby together with the main database.
1.g) How to test the steps in this note upfront?
To properly test the *database* changes (= the steps in this note) you will need to perform the steps on a copy of the database (for example a clone restored from a backup).
2) Check the source database for unneeded or problematic objects:
Tip (10g and up): first remove all objects in the recyclebin.
Sqlplus / as sysdba
SELECT OWNER, ORIGINAL_NAME, OBJECT_NAME, TYPE FROM dba_recyclebin ORDER BY 1,2
/
If there are objects in the recyclebin then perform:
PURGE DBA_RECYCLEBIN
/
This will remove unneeded objects and avoid confusion in the following steps. It might be a good idea to simply disable the recyclebin until the change is done by setting the recyclebin='OFF' parameter.
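As an illustration only, a minimal sketch of disabling the recyclebin for the duration of the change (this assumes an spfile is in use; re-enable it the same way once the migration is done):
-- set in the spfile and restart the instance afterwards
ALTER SYSTEM SET recyclebin='OFF' SCOPE=SPFILE;
-- after the characterset change is finished:
-- ALTER SYSTEM SET recyclebin='ON' SCOPE=SPFILE;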
2.a) Invalid objects.
Sqlplus / as sysdba
SELECT owner, object_name, object_type, status FROM dba_objects WHERE status ='INVALID'
/
If there are any invalid objects, run utlrp.sql:
SQL> @?/rdbms/admin/utlrp.sql
If there are any left after running utlrp.sql then please manually resolve / drop the invalid objects.
2.b) Orphaned Datapump master tables (10g and up)
Sqlplus / as sysdba
SELECT o.status, o.object_id, o.object_type, o.owner ||'.' ||object_name "OWNER.OBJECT" FROM dba_objects o, dba_datapump_jobs j WHERE o.owner =j.owner_name AND o.object_name=j.job_name AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2
/
See Note 336014.1 How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS?
2.c) Unneeded sample schema/users.
The 'HR', 'OE', 'SH', 'PM', 'IX', 'BI' and 'SCOTT' users are sample schema users. There is no point in having these sample schemas in a production system. If a sample schema exists we suggest to remove it, as in the sketch below.
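For example, a leftover sample schema can be removed like this (a sketch; first verify the account really is the unused sample schema and not an application user of the same name):
-- check the account first, then drop it together with all its objects
SELECT username, created FROM dba_users WHERE username = 'SH';
DROP USER SH CASCADE;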
2.d) Leftover Temporary tables using CHAR semantics - "ORA-14450: attempt to access a transactional temp table already in use" during the change to AL32UTF8
Sqlplus / as sysdba
SELECT C.owner ||'.' || C.table_name ||'.' || C.column_name ||' (' || C.data_type ||' ' || C.char_length ||' CHAR)' FROM all_tab_columns C WHERE C.char_used = 'C' AND C.table_name IN (SELECT table_name FROM dba_tables WHERE TEMPORARY='Y' ) AND C.data_type IN ('VARCHAR2', 'CHAR') ORDER BY 1
/
If this gives rows back then the change to AL32UTF8 MAY fail with "ERROR at line 1: ORA-00604: error occurred at recursive SQL level 1, ORA-14450: attempt to access a transactional temp table already in use."
If you have tables listed by the above select, it is a good idea to confirm the application will recreate them if needed and to drop them now (or, if the db is still in use now, do this just before the final Csscan run, point 7 in this note).
2.e) Make sure your database is in good health.
It might be a good idea to run Note 456468.1 Identify Data Dictionary Inconsistency and the NOTE 136697.1 "hcheck8i.sql" script to check for known problems in Oracle8i, Oracle9i and Oracle10g.
3) Check the Source database for "Lossy" (invalid code points in the current source character set).
Run Csscan with the following syntax:
$ csscan \"sys/
* Always run Csscan connecting with a 'sysdba' connection/user, do not use "system" or "csmig" user.
* The FROMCHAR= is the current NLS_CHARACTERSET of the database, which can be checked with:
conn / AS sysdba
SELECT value FROM NLS_DATABASE_PARAMETERS WHERE parameter='NLS_CHARACTERSET' /
* The TOCHAR= is for this verification scan also set to the current NLS_CHARACTERSET (see below).
Once Csscan is actually scanning the tables, v$session_longops can be used to see the progress of scans of big tables:
Sqlplus / as sysdba
SET pages 1000
SELECT target, TO_CHAR(start_time,'HH24:MI:SS - DD-MM-YY'), time_remaining, sofar, totalwork, sid, serial#, opname
FROM v$session_longops
WHERE sid IN (SELECT sid FROM v$session WHERE upper(program) LIKE 'CSSCAN%')
AND sofar < totalwork
ORDER BY start_time
/
Csscan will create 3 files (with LOG=dbcheck: a dbcheck.out, a dbcheck.txt and a dbcheck.err file).
This Csscan run checks whether all data is stored correctly in the current character set. Because the TOCHAR and FROMCHAR character sets are the same, there cannot be any "Convertible" or "Truncation" data reported in dbcheck.txt.
One exception is "binary" or "encrypted data" stored in CHAR, VARCHAR2 or CLOB; this is not supported. The ONLY supported datatypes to store "raw" binary data (like PDF, doc, docx, jpeg, png, etc. files) or encrypted data like hashed/encrypted passwords are LONG RAW or BLOB.
If you want to store binary data (like PDF, doc, docx, jpeg, png, etc. files) or encrypted data like hashed/encrypted passwords in a CHAR, VARCHAR2, LONG or CLOB datatype, then this must be converted to a "characterset safe" representation like base64 before doing any change of the NLS_CHARACTERSET. See Note 1297507.1 Problems with (Importing) Encrypted Data After Character Set Change Using Other NLS_CHARACTERSET Database or Upgrading the (client) Oracle Version.
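As an illustration of such a "characterset safe" representation, PL/SQL can base64-encode binary data before it is stored in a character column. A minimal sketch using the standard UTL_ENCODE and UTL_RAW packages (the input value is a placeholder for real binary data):
DECLARE
  v_raw    RAW(2000);
  v_base64 VARCHAR2(4000);
BEGIN
  v_raw    := UTL_RAW.CAST_TO_RAW('some binary value');  -- placeholder for real binary/encrypted data
  v_base64 := UTL_RAW.CAST_TO_VARCHAR2(UTL_ENCODE.BASE64_ENCODE(v_raw));
  -- base64 output only uses US7ASCII characters, so it is safe in any characterset
  DBMS_OUTPUT.PUT_LINE(v_base64);
END;
/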
3.a) How to "rescue" lossy data?
The first thing to do is try to find out what the actual encoding of the "Lossy" data is; this is documented in Note 444701.1 Csscan output explained, sections "C.2) The source database has data (codes) that are NOT defined in the source characterset." and "C.3) The limit of what Csscan can detect when dealing with lossy, an example:"
A common case is a US7ASCII or WE8ISO8859P1 database that actually contains WE8MSWIN1252 data; the reason is explained in Note 252352.1 Euro Symbol Turns up as Upside-Down Questionmark. The flow to correct this is found in Note 555823.1 Changing US7ASCII or WE8ISO8859P1 to WE8MSWIN1252.
When preparing a test environment to debug this you can use 2 things:
Note: if you use this note to go from AL32UTF8 to UTF8 (or the inverse) and you have lossy, then in most cases this means there is non-UTF8 encoded data stored; this needs to be checked.
This select will give all the lossy objects found in the last Csscan run:
Note that when using the csscan SUPPRESS parameter this select may give incomplete results (not all tables).
Sqlplus / as sysdba
SELECT DISTINCT z.owner_name || '.' || z.table_name || '(' || z.column_name || ') - ' || z.column_type || ' ' LossyColumns FROM csmig.csmv$errors z WHERE z.error_type ='DATA_LOSS' ORDER BY LossyColumns /
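To inspect the actual byte values of a reported lossy row, the DUMP function can be used. A sketch (the owner, table, column and rowid are hypothetical; take the real values from the .err file of the csscan run):
-- format 1016 = show the byte values in hex together with the column character set
SELECT DUMP(mycolumn, 1016) FROM myowner.mytable WHERE ROWID = 'AAAxxxAABAAAxxxAAA';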
Lossy in Data Dictionary objects
When using Csalter/Alter Database Character Set: most "Lossy" in Data Dictionary objects will be corrected by correcting the database as a whole. If "correcting" the database still gives some "lossy" in Data Dictionary objects then see Note 258904.1 Solving Convertible or Lossy data in Data Dictionary objects reported by Csscan when changing the NLS_CHARACTERSET. For example, one common thing seen is "Lossy" found only in SYS.SOURCE$; most of the time this means some package source code contains illegal codes/bad data. You can use the selects found in Note 291858.1 "SYS.SOURCE$ marked as having Convertible or Lossy data in Csscan output" to find what objects are affected. Note that you CANNOT "fix" tables like SYS.SOURCE$ itself; you need to recreate the objects whose text is stored in SYS.SOURCE$. Do NOT truncate or export Data Dictionary objects themselves unless this is said to be possible in Note 258904.1.
When using full export/import into a new AL32UTF8 database: "Lossy" in Data Dictionary objects is only relevant when it concerns "Application data". The thing to check is that there is no "lossy" in tables like SYS.SOURCE$ (package source code) / SYS.COM$ (comments on objects) / SYS.VIEW$ (view definitions) / SYS.COL$ (column names) or SYS.TRIGGER$ (triggers). The reason is simply that these Data Dictionary objects contain information about user objects or pl/sql code. If you have "convertible" there, that is not an issue. For most conversions, if there is "lossy" it will be in SYS.SOURCE$.
3.b) Can I solve "lossy" user/application data in another way?
Yes, it is possible to:
4) Check for "Convertible" and "Truncation" data when going to AL32UTF8If there was no "Lossy" in step 3 or you corrected the NLS_CHARACTERSET and started over then run csscan with the following syntax:
$ csscan \"sys/
Note the usage of CAPTURE=Y, using CAPTURE=N (the default) will speed up csscan process itself and use far less space in the CSMIG tablespace but the select in point 6a) will then NOT list the "Convertible" User data.
The .txt file will always list the convertible user data regardless of the setting of CAPTURE. For very big databases it might be an idea to run csscan first using CAPTURE=N as a first scan to have an idea about the dataset, and then, if needed, do table or user level scans to know which exact rows are convertible. If there is a lot of "Convertible" Application data in your database then the creation of the .err file will take a considerable amount of time when using CAPTURE=Y. When creating the .err file csscan will fetch the first 30 bytes of each logged row together with the rowid and the reason why it is logged. Csscan will do this for:
* "Truncation" and "Lossy" rows and "Convertible data dictionary" data if CAPTURE=N
* ALL "Convertible" (application data also), "Truncation" and "Lossy" rows if CAPTURE=Y
Csscan will create 3 files (with LOG=toutf8: a toutf8.out, a toutf8.txt and a toutf8.err file).
When going to UTF8 or AL32UTF8 there should normally be NO entries under "Lossy" in toutf8.txt, because they should have been filtered out in step 3). If there is "Lossy" data then make sure you accept that this data will become "?" and treat it as "convertible".
Note: if you use this note not to go to UTF8 or AL32UTF8 but to any other multibyte characterset (ZHS16GBK, ZHT16MSWIN950, ZHT16HKSCS, ZHT16HKSCS31, KO16MSWIN949, JA16SJIS ...), then there CAN be "Lossy". This is data not known in the new characterset; it needs to be removed before going further, or exp/imp needs to be used for all tables that have "lossy" - exp/imp will then remove this lossy data.
For an overview of the entries in the .txt file and what they mean, please see Note 444701.1 Csscan output explained.
Note: when using export/import and there is only a need to export and import certain schemas into a new/other AL32UTF8 database, it is also possible to run csscan only on the actual schema users or tables that will be exported. This will reduce the time needed for the csscan run by only checking the actually affected dataset.
Syntax examples are found in Note 1297961.1 ORA-01401 / ORA-12899 While Importing Data In An AL32UTF8 / UTF8 (Unicode) Or Other Multibyte NLS_CHARACTERSET Database. To avoid confusion: when using Csalter you NEED to use the FULL=Y syntax in point 8) as the last scan before running csalter.
5) Dealing with "Truncation" data.
As explained in Note 788156.1, characters may use more BYTES in AL32UTF8 than in the source characterset. Truncation data means the row will not fit in the current column definition once converted to AL32UTF8.
Note: sometimes there is also a list of tables at the end of the toutf8.txt under the header [Truncation Due To Character Semantics]; this is not the same as the truncation this section discusses.
When using Csalter/Alter database and/or (full) export/import, the objects under the [Truncation Due To Character Semantics] header can simply be ignored (to be clear: only the entries under this header).
Truncation issues will always require some work. There are a number of ways to deal with them:
5a) or shorten the data before export
A script that gives an idea on how to do this, based on the csscan result of the LAST run (Csscan will only keep ONE result set), is found in Note 1231922.1 Script to Solve Truncation in Csscan output by Shortening Data Before Conversion.
5b) or adapt the columns to fit the expansion of the data
How much the data will expand can be seen using:
conn / as sysdba
SET serveroutput ON size 200000
DECLARE
  v_length_trunc NUMBER;
  v_Stmt1 VARCHAR2(6000 BYTE);
BEGIN
  DBMS_OUTPUT.PUT_LINE('output is: ');
  DBMS_OUTPUT.PUT_LINE('owner - table_name - column_name - column_type - max expansion in Bytes.');
  DBMS_OUTPUT.PUT_LINE('.........................................................................');
  FOR rec IN (SELECT u.owner_name, u.table_name, u.column_name, u.column_type, u.max_post_convert_size
                FROM csmig.csmv$columns u
               WHERE u.exceed_size_rows > to_number('0')
               ORDER BY u.owner_name, u.table_name, u.column_name)
  LOOP
    IF rec.max_post_convert_size <= '2000' AND rec.column_type = 'CHAR' THEN
      DBMS_OUTPUT.PUT_LINE(rec.owner_name ||'.'|| rec.table_name ||' ('|| rec.column_name ||') - '|| rec.column_type ||' - '|| rec.max_post_convert_size || ' Bytes .');
    END IF;
    IF rec.max_post_convert_size > '2000' AND rec.column_type = 'CHAR' THEN
      dbms_output.put_line('! Warning !');
      dbms_output.put_line('Data in ' || rec.owner_name ||'.'|| rec.table_name ||'.'|| rec.column_name || ' will expand OVER the datatype limit of CHAR !');
      IF rec.max_post_convert_size > '4000' THEN
        dbms_output.put_line('The ' || rec.column_name ||' CHAR column NEED to change to CLOB to solve the truncation!');
        dbms_output.put_line('The ' || rec.column_name ||' column data size will become max ' || rec.max_post_convert_size || ' Bytes .');
      END IF;
      IF rec.max_post_convert_size <= '4000' THEN
        v_Stmt1 := 'select max(length(rtrim("'|| rec.column_name || '"))) from "'|| rec.owner_name || '"."' || rec.table_name ||'" ';
        EXECUTE IMMEDIATE v_Stmt1 INTO v_length_trunc;
        dbms_output.put_line('The ' || rec.column_name ||' CHAR column NEED to change to VARCHAR2 to solve the truncation!');
        dbms_output.put_line('The new ' || rec.column_name ||' VARCHAR2 column size NEED to be at least VARCHAR2 ' || rec.max_post_convert_size || ' BYTE .');
        dbms_output.put_line('or when using CHAR semantics be at least VARCHAR2 ' || v_length_trunc || ' CHAR .');
      END IF;
      dbms_output.put_line('Using CHAR semantics for this column without changing datatype will NOT solve the truncation!');
    END IF;
    IF rec.max_post_convert_size <= '4000' AND rec.column_type = 'VARCHAR2' THEN
      DBMS_OUTPUT.PUT_LINE(rec.owner_name ||'.'|| rec.table_name ||' ('|| rec.column_name ||') - '|| rec.column_type ||' - '|| rec.max_post_convert_size || ' Bytes .');
    END IF;
    IF rec.max_post_convert_size > '4000' AND rec.column_type = 'VARCHAR2' THEN
      dbms_output.put_line('! Warning !');
      dbms_output.put_line('Data in ' || rec.owner_name ||'.'|| rec.table_name ||'.'|| rec.column_name || ' will expand OVER the datatype limit of VARCHAR2 !');
      dbms_output.put_line('The ' || rec.column_name ||' VARCHAR2 column NEED to change to CLOB to solve the truncation!');
      dbms_output.put_line('The ' || rec.column_name ||' column data size will become max ' || rec.max_post_convert_size || ' Bytes .');
      dbms_output.put_line('Using CHAR semantics for this column without changing datatype will NOT solve the truncation!');
    END IF;
  END LOOP;
END;
/
sample output:
output is:
owner - table_name - column_name - column_type - max expansion in Bytes.
.........................................................................
SCOTT.TEST1 (COL1) - VARCHAR2 - 177 Bytes .
SCOTT.TEST1 (COL2) - CHAR - 177 Bytes .
SCOTT.TEST2 (COL1) - VARCHAR2 - 2997 Bytes .
! Warning !
Data in SCOTT.TEST2.COL2 will expand OVER the datatype limit of VARCHAR2 !
The COL2 VARCHAR2 column NEED to change to CLOB to solve the truncation!
The COL2 column data size will become max 5997 Bytes .
Using CHAR semantics for this column without changing datatype will NOT solve the truncation!
! Warning !
Data in SCOTT.TEST3.COL1 will expand OVER the datatype limit of CHAR !
The COL1 CHAR column NEED to change to VARCHAR2 to solve the truncation!
The new COL1 VARCHAR2 column size NEED to be at least VARCHAR2 2997 BYTE .
or when using CHAR semantics be at least VARCHAR2 999 CHAR .
Using CHAR semantics for this column without changing datatype will NOT solve the truncation!
! Warning !
Data in SCOTT.TEST3.COL2 will expand OVER the datatype limit of CHAR !
The COL2 CHAR column NEED to change to CLOB to solve the truncation!
The COL2 column data size will become max 5997 Bytes .
Using CHAR semantics for this column without changing datatype will NOT solve the truncation!
PL/SQL procedure successfully completed.
Note: Truncation in Data Dictionary objects is rare and will be solved by using the steps for "Convertible" Data Dictionary data.
For most columns, changing the column datatype "size" will be enough (these are the columns reported WITHOUT any warning in the above output).
The above script will not adapt/change anything; it lists the minimum amount of BYTES needed to solve the "Truncation".
How to use CHAR semantics is discussed in Note 144808.1 Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS) and in Note 313175.1 Changing columns to CHAR length semantics.
For some columns it might be needed to change datatype , any such columns will be reported with a warning in above output.
Note: the need for a datatype change is not often seen, but it is possible. If the expansion in BYTES is bigger than the maximum data length of the datatype, which is 2000 bytes for CHAR and 4000 bytes for VARCHAR2, then using CHAR semantics will not help.
* for CHAR datatype columns where data becomes more than 2000 bytes but less than 4000 bytes after conversion to AL32UTF8 you need to pre-create / adapt the CHAR column as VARCHAR2 before import, OR change the CHAR column to VARCHAR2 before export (see the sketch below).
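As an illustration of adapting columns, using the hypothetical tables from the sample output above (take the real names and sizes from the output of the script in 5b):
-- enlarge the byte size of the column:
ALTER TABLE scott.test1 MODIFY (col1 VARCHAR2(177 BYTE));
-- or switch the column to CHAR semantics instead:
ALTER TABLE scott.test1 MODIFY (col1 VARCHAR2(59 CHAR));
-- a CHAR column whose data will exceed 2000 bytes can be changed to VARCHAR2:
ALTER TABLE scott.test3 MODIFY (col1 VARCHAR2(2997 BYTE));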
6) Dealing with "Convertible" data.
Here you choose depending on what is required / what you want to do; 2 options exist:
1) When you want to use Csalter/Alter Database Character Set and change the current database, go to point 6a).
2) When you want to use (FULL or user level) export/import into a new or existing AL32UTF8 db, take the (full or user level) export now and go to point 12).
When using export/import with the "old" Exp/Imp tools, the NLS_LANG setting is simply AMERICAN_AMERICA.<current NLS_CHARACTERSET of the database>.
Expdp/Impdp (datapump) does not use the NLS_LANG for data conversion. See Note 227332.1 NLS considerations in Import/Export - Frequently Asked Questions.
The steps from 6a) until 11) are NOT needed when you go to another/new AL32UTF8 database using export/import.
6.a) "Convertible" Application Data:
When using Csalter/Alter Database Character Set, ALL "Convertible" User / Application data needs to be exported and then truncated/deleted.
Again, this is often misunderstood:
* if the "Convertible" data under [Application Data Conversion Summary] is NOT exported then Csalter will NOT run and fail with Unrecognized convertible date found in scanner result . * if the "Convertible" data under [Application Data Conversion Summary] is NOT exported then when Using "Alter Database Character Set" this data will be CORRUPTED. It is MANDATORY to export and truncate/delete ALL "Convertible" User / Application Data data .
10g and 11g: this will give a list of all "Application Data" tables that have "Convertible" (or also "Truncation") data and need to be exported and truncated/deleted before Csalter can be used:
Sqlplus / as sysdba
SELECT DISTINCT z.owner_name || '.' || z.table_name || '(' || z.column_name || ') - ' || z.column_type || ' - ' || z.error_type || ' ' UserColumns FROM csmig.csmv$errors z WHERE z.owner_name NOT IN (SELECT DISTINCT username FROM csmig.csm$dictusers ) ORDER BY UserColumns /
To check for constraint definitions on the tables before exporting and truncating them Note 1019930.6 Script: To report Table Constraints or Note 779129.1 HowTo show all recursive constraints leading to a table can be used.
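A minimal sketch of the export-and-truncate step for one "Convertible" table (user, table and file names are hypothetical, the NLS_LANG characterset value is an example - it must be the CURRENT database characterset; disable or drop referencing constraints first as per the notes above):
$ export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252   # characterset part = current NLS_CHARACTERSET
$ exp scott/tiger TABLES=emp FILE=scott_emp.dmp LOG=scott_emp_exp.log
$ sqlplus / as sysdba
SQL> TRUNCATE TABLE scott.emp;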
When using export/import with the "old" Exp/Imp tools, the NLS_LANG setting is simply AMERICAN_AMERICA.<current NLS_CHARACTERSET of the database>.
Expdp/Impdp (datapump) does not use the NLS_LANG for data conversion. See Note 227332.1 NLS considerations in Import/Export - Frequently Asked Questions.
6.b) "Convertible" data in Data Dictionary objects:
The main challenge when using Csalter/Alter Database Character Set is, besides exporting and truncating all "Convertible" User/Application data, most of the time the "Convertible" data in Data Dictionary objects.
10g and 11g: if the next select gives NO rows, there is NO action to take on Data Dictionary objects, even if there is "Convertible" CLOB listed under the "[Data Dictionary Conversion Summary]" header in the toutf8.txt.
If it does give rows, see Note 258904.1 Solving Convertible or Lossy data in Data Dictionary objects reported by Csscan when changing the NLS_CHARACTERSET
Sqlplus / as sysdba
SELECT DISTINCT z.owner_name || '.' || z.table_name || '(' || z.column_name || ') - ' || z.column_type || ' - ' || z.error_type || ' ' NotHandledDataDictColumns
FROM csmig.csmv$errors z
WHERE z.owner_name IN (SELECT DISTINCT username FROM csmig.csm$dictusers)
MINUS
SELECT DISTINCT z.owner_name || '.' || z.table_name || '(' || z.column_name || ') - ' || z.column_type || ' - ' || z.error_type || ' ' DataDictConvCLob
FROM csmig.csmv$errors z
WHERE z.error_type ='CONVERTIBLE'
AND z.column_type = 'CLOB'
AND z.owner_name IN (SELECT DISTINCT username FROM csmig.csm$dictusers)
ORDER BY NotHandledDataDictColumns
/
If you do log a SR about what to do please DO provide the output of above select.
Do NOT truncate or export Data Dictionary objects themselves unless this is said to be possible in Note 258904.1 Solving Convertible or Lossy data in Data Dictionary objects reported by Csscan when changing the NLS_CHARACTERSET.
This note may be useful to identify Oracle created users in your database: Note 160861.1 Oracle Created Database Users: Password, Usage and Files. This note may also be useful: Note 472937.1 Information On Installed Database Components and Schemas.
7) Before using Csalter / Alter Database Character Set check the database for:
7.a) Partitions using CHAR semantics - "ORA-14265: data type or length of a table subpartitioning column may not be changed" or "ORA-14060: data type or length of a table partitioning column may not be changed" during the change to AL32UTF8
Sqlplus / as sysdba
SELECT c.owner, c.table_name, c.column_name, c.data_type, c.char_length FROM all_tab_columns c, all_tables t WHERE c.owner = t.owner AND c.table_name = t.table_name AND c.char_used = 'C' AND t.partitioned ='YES' AND c.table_name NOT IN (SELECT table_name FROM all_external_tables ) AND c.data_type IN ('VARCHAR2', 'CHAR') ORDER BY 1,2 /
If this select gives rows back then the change to AL32UTF8 will fail with "ORA-14265: data type or length of a table subpartitioning column may not be changed" or "ORA-14060: data type or length of a table partitioning column may not be changed" if those columns using CHAR semantics are used as partitioning or subpartitioning key, since the change to AL32UTF8 will adapt the actual byte length of CHAR semantics columns (data_length in all_tab_columns).
7.b) Function, Domain or Join indexes on CHAR semantics columns.
7.b.1) Functional or domain indexes on columns using CHAR semantics - "ORA-30556: functional index is defined on the column to be modified" or "ORA-02262: ORA-904 occurs while type-checking column default value expression" during the change to AL32UTF8
Sqlplus / as sysdba
SELECT owner, index_name, table_owner, table_name, status, index_type FROM dba_indexes WHERE index_type IN ('DOMAIN','FUNCTION-BASED NORMAL','FUNCTION-BASED NORMAL/REV','FUNCTION-BASED BITMAP') AND table_name IN (SELECT UNIQUE (x.table_name) FROM dba_tab_columns x WHERE x.char_used ='C' AND x.data_type IN ('CHAR','VARCHAR2')) ORDER BY 1,2 /
If this gives rows back then the change to AL32UTF8 will fail with "ORA-30556: functional index is defined on the column to be modified" or with "ORA-02262: ORA-904 occurs while type-checking column default value expression" . If there are functional or domain indexes on columns using CHAR semantics the index need to be dropped and recreated after the change.
Sqlplus / as sysdba
SET LONG 2000000
SET pagesize 0
EXECUTE dbms_metadata.set_transform_param(dbms_metadata.session_transform,'STORAGE',FALSE);
SELECT dbms_metadata.get_ddl('INDEX',index_name,owner) FROM dba_indexes
WHERE index_type IN ('DOMAIN','FUNCTION-BASED NORMAL','FUNCTION-BASED NORMAL/REV','FUNCTION-BASED BITMAP')
AND table_name IN (SELECT UNIQUE (x.table_name) FROM dba_tab_columns x WHERE x.char_used ='C' AND x.data_type IN ('CHAR','VARCHAR2'));
EXECUTE dbms_metadata.set_transform_param(dbms_metadata.session_transform,'DEFAULT')
7.b.2) Join indexes on columns using CHAR semantics - "ORA-54028: cannot change the HIDDEN/VISIBLE property of a virtual column" during the change to AL32UTF8
Sqlplus / as sysdba
SELECT owner, index_name, table_owner, table_name, status, index_type FROM dba_indexes WHERE table_name IN (SELECT UNIQUE (object_name) FROM dba_objects WHERE object_id IN (SELECT UNIQUE obj# FROM sys.col$ WHERE property='8454440' )) AND table_name IN (SELECT UNIQUE (table_name) FROM dba_tab_columns WHERE char_used ='C' AND data_type IN ('CHAR','VARCHAR2')) ORDER BY 1,2 /
If this gives rows back then the change to AL32UTF8 will fail with "ORA-54028: cannot change the HIDDEN/VISIBLE property of a virtual column" .
Sqlplus / as sysdba
SET LONG 2000000
SET pagesize 0
EXECUTE dbms_metadata.set_transform_param(dbms_metadata.session_transform,'STORAGE',FALSE);
SELECT dbms_metadata.get_ddl('INDEX',index_name,owner) FROM dba_indexes
WHERE table_name IN (SELECT UNIQUE (object_name) FROM dba_objects WHERE object_id IN (SELECT UNIQUE obj# FROM sys.col$ WHERE property='8454440'))
AND table_name IN (SELECT UNIQUE (x.table_name) FROM dba_tab_columns x WHERE x.char_used ='C' AND x.data_type IN ('CHAR','VARCHAR2'));
EXECUTE dbms_metadata.set_transform_param(dbms_metadata.session_transform,'DEFAULT')
There is a bug in 11g releases before 11.2.0.2 (11.2.0.1, 11.1.0.7 and 11.1.0.6) that causes the "drop index" command to not drop the hidden columns of bitmap join indexes, resulting in ORA-54028 during Csalter even if the indexes are dropped.
7.c) SYSTIMESTAMP in the DEFAULT value clause for tables using CHAR semantics - "ORA-604 error occurred at recursive SQL level %s, ORA-1866 the datetime class is invalid" during the change to AL32UTF8.
Sqlplus / as sysdba
SET serveroutput ON
BEGIN
  FOR rec IN (SELECT owner, table_name, column_name, data_default FROM dba_tab_columns WHERE char_used='C')
  LOOP
    IF upper(rec.data_default) LIKE '%TIMESTAMP%' THEN
      dbms_output.put_line(rec.owner ||'.'|| rec.table_name ||'.'|| rec.column_name);
    END IF;
  END LOOP;
END;
/
If this gives rows back then the change to AL32UTF8 will fail with " ORA-604 error occurred at recursive SQL level %s , ORA-1866 the datetime class is invalid" . 7.d) Clusters using CHAR semantics - "ORA-01447: ALTER TABLE does not operate on clustered columns" during the change to AL32UTF8
Sqlplus / as sysdba
SELECT owner, object_name FROM all_objects WHERE object_type = 'CLUSTER' AND object_name IN (SELECT UNIQUE (table_name) FROM dba_tab_columns WHERE char_used ='C' ) ORDER BY 1,2
/
If this gives rows back then the change to AL32UTF8 will fail with "ORA-01447: ALTER TABLE does not operate on clustered columns". Those clusters need to be dropped and recreated after the character set change.
7.e) Unused columns using CHAR semantics - "ORA-00604: error occurred at recursive SQL level 1" together with an "ORA-00904: "SYS_C00002_09031813:50:03$": invalid identifier" during the change to AL32UTF8
Sqlplus / as sysdba
SELECT owner, table_name FROM dba_unused_col_tabs WHERE table_name IN (SELECT UNIQUE (table_name) FROM dba_tab_columns WHERE char_used ='C' ) ORDER BY 1,2 /
If this gives rows back then the change to AL32UTF8 will fail with "ORA-00604: error occurred at recursive SQL level 1" together with an "ORA-00904: "SYS_C00002_09031813:50:03$": invalid identifier". Note that the "SYS_C00002_09031813:50:03$" will change for each column.
ALTER TABLE table_name DROP UNUSED COLUMNS
/
7.f) Check that you have enough room to run Csalter or to import the "Convertible" data again afterwards.
In 10g and 11g verify at least the "Expansion" column found under [Database Size] in toutf8.txt/toutf8fin.txt and check that you have at least 2 times the listed expansion free in the SYSTEM tablespace.
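A quick sketch to see how much space is currently free in the SYSTEM tablespace:
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) free_mb
FROM dba_free_space
WHERE tablespace_name = 'SYSTEM'
GROUP BY tablespace_name
/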
7.g) (10g and 11g) Objects in the recyclebin - "ORA-38301 can not perform DDL/DML over objects in Recycle Bin" during the change to AL32UTF8
If the recyclebin was not disabled, make sure there are no objects in the recyclebin:
Sqlplus / as sysdba
SELECT OWNER, ORIGINAL_NAME, OBJECT_NAME, TYPE FROM dba_recyclebin ORDER BY 1,2
/
If there are objects in the recyclebin then perform:
Sqlplus / as sysdba
PURGE DBA_RECYCLEBIN
/
This will remove the recyclebin objects; otherwise an "ORA-38301 can not perform DDL/DML over objects in Recycle Bin" may be seen during the change to AL32UTF8.
7.h) Check if the compatible parameter is set to your base version
Sqlplus / as sysdba
sho parameter compatible
Do not try to migrate, for example, a 10g database with compatible=9.2.
7.i) For Oracle 11.2.0.3, 11.2.0.2, 11.2.0.1, 11.1.0.7 and 11.1.0.6: check for SQL plan baselines and profiles
Sqlplus / as sysdba
select count(*) from dba_sql_plan_baselines;
select count(*) from dba_sql_profiles;
If these selects give a result then first apply Patch 12989073 ORA-31011 ORA-19202 AFTER CHARACTER CONVERSION TO UTF8.
8) After any "Lossy" is solved, "Truncation" data is planned to be addressed and/or "Convertible" data is exported / truncated / addressed, and point 7) is ok, run Csscan again as a final check.
$ csscan \"sys/
8.a) For 8i/9i the Csscan output needs to be "Changeless" for all CHAR, VARCHAR2, CLOB and LONG data (Data Dictionary and User/Application data) (= BEFORE going to point 10.a)).
In order to use "Alter Database Character Set" you need to see in the toutf8fin.txt file under [Scan Summary] this message:
All character type data in the data dictionary remain the same in the new character set
All character type application data remain the same in the new character set
If this is NOT seen in the toutf8fin.txt then something was missed or not all steps in this note were followed. Ignoring this WILL lead to data corruption.
8.b) For 10g and 11g the Csscan output needs to be (= BEFORE going to point 10.b)):
* "Changeless" for all CHAR VARCHAR2, and LONG data (Data Dictionary and User/Application data )
All character type application data remain the same in the new character set
and under [Data Dictionary Conversion Summary] this message:
The data dictionary can be safely migrated using the CSALTER script
You need to see BOTH, the Csalter message alone is NOT enough.
Note the difference between "All character type application data remain the same in the new character set" and "All character type application data are convertible to the new character set".
If you see "All character type application data are convertible to the new character set" then Csalter will NOT work.
If the required messages are seen, then continue with step 10.b); point 9) is a short overview of all the steps.
9) Summary of steps needed to use Alter Database Character Set / Csalter:
9.a) For 9i and lower:
9.a.1) Export all the "Convertible" User/Application Data data (make sure that the character set part of the NLS_LANG is set to the current database character set during the export session)
In order to use "Alter Database Character Set" you need to see in the toutf8fin.txt file under [Scan Summary] this message::
All character type data in the data dictionary remain the same in the new character set
All character type application data remain the same in the new character set
If this is NOT seen in the toutf8fin.txt then something was missed or not all steps in this note were followed. Ignoring this WILL lead to data corruption.
If this is now correct then proceed to step 10.a); otherwise do the same again for the rows you missed.
9.b) For 10g and 11g:
9.b.1) Export all the "Convertible" User/Application Data (make sure that the character set part of the NLS_LANG is set to the current database character set during the export session)
When using Csscan in 10g and up the toutf8.txt or toutf8fin.txt need to contain this before doing step 10.b):
The data dictionary can be safely migrated using the CSALTER script
and
All character type application data remain the same in the new character set
If this is NOT seen in the toutf8fin.txt then Csalter will NOT work and this means something is missed or not all steps in this note are followed.
9.b.5) If this is now correct then proceed to step 10.b); otherwise do the same again for the rows you missed.
10) Running Csalter (10g and 11g) or Alter Database Character Set (8i and 9i)
Perform a backup of the database. Check the backup. Double-check the backup.
10.a) The steps for 8i and 9i ONLY - do NOT use these in 10g or 11g
Do not do steps 10.a.1) and 10.a.2) on a 10g or 11g system.
For 10g and up go to step 10.b). Do NOT use "Alter database character set" in 10g and 11g, using "Alter database character set" to go to UTF8 or AL32UTF8 is NOT supported in 10g and 11g and WILL corrupt Data Dictionary objects when changing to UTF8 or AL32UTF8.
Shutdown the listener and any application that connects locally to the database.
SELECT sid, serial#, username, status, osuser, machine, process, program FROM v$session WHERE username IS NOT NULL;
should only give ONE row - your sqlplus connection.
Sqlplus / as sysdba
sho parameter CLUSTER_DATABASE
sho parameter PARALLEL_SERVER
Sqlplus / as sysdba
SPOOL Nswitch.log
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
ALTER SYSTEM SET AQ_TM_PROCESSES=0;
ALTER DATABASE OPEN;
-- Do not do these steps on a 10g or 11g system
ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8;
SHUTDOWN IMMEDIATE;
-- You NEED to restart the database before doing ANY operation on this database
-- exit this session now, do not use the session that did the alter database for other operations
EXIT
-- reconnect to the database and start the database
Sqlplus / as sysdba
STARTUP;
-- in 8i you need to do the startup/shutdown 2 times
SHUTDOWN;
STARTUP;
An alter database typically takes only a few minutes or less; it depends on the number of columns in the database, not on the amount of data. Without the INTERNAL_USE clause you get an ORA-12712: new character set must be a superset of old character set.
10.a.3) Restore the PARALLEL_SERVER (8i) and CLUSTER_DATABASE parameter if necessary and start the database. For RAC start the other instances.
WARNING WARNING WARNING
Do NEVER use "INTERNAL_USE" unless you did follow the guidelines STEP BY STEP here in this note and you have a good idea what you are doing. Do NEVER use "INTERNAL_USE" on a 10g or 11g system to go to UTF8 or AL32UTF8. If this was done then the only recovery is back to backup. Do NEVER use "INTERNAL_USE" to "fix" display problems, but follow Note:179133.1 The correct NLS_LANG in a Windows Environment or Note:264157.1 The correct NLS_LANG setting in Unix Environments If you use the INTERNAL_USE clause on a database where there is data listed as convertible without exporting that data then the data will be corrupted by changing the database character set ! goto point 10.c) , do not do point 10.b) 10.b) The steps for version 10g and 11g
Csalter.plb needs to be used within 7 days after the Csscan run, otherwise you will get a 'The CSSCAN result has expired' message.
Csalter.plb takes no arguments. If the required messages (see point 8.b) in this note) are seen in the csscan output then Csalter will change the characterset to the one specified in the TOCHAR of the last csscan run. In order to run Csalter.plb the last csscan needs to have been run with FULL=Y.
Shutdown the listener and any application that connects locally to the database.
SELECT sid, serial#, username, status, osuser, machine, process, program FROM v$session WHERE username IS NOT NULL;
should only give ONE row - your sqlplus connection. If more than one session is connected, Csalter will fail and report "Sorry only one session is allowed to run this script".
If you are using RAC you will need to start the database in single instance with CLUSTER_DATABASE = FALSE
Sqlplus / as sysdba
-- Make sure the CLUSTER_DATABASE parameter is set
-- to false or it is not set at all.
-- If you are using RAC you will need to start the database in single instance
-- with CLUSTER_DATABASE = FALSE
sho parameter CLUSTER_DATABASE
-- if you are using spfile
SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type" FROM sys.v_$parameter WHERE name = 'spfile';
-- note the values for
sho parameter job_queue_processes
sho parameter aq_tm_processes
-- (this is Bug 6005344, fixed in 11g)
-- then do
shutdown
startup restrict
SPOOL Nswitch.log
PURGE DBA_RECYCLEBIN
/
-- next select should only give ONE row - your sqlplus connection
-- If more than one session is connected Csalter will fail and report
-- "Sorry only one session is allowed to run this script".
SELECT sid, serial#, username, status, osuser, machine, process, program FROM v$session WHERE username IS NOT NULL;
-- do this alter system or you might run into "ORA-22839: Direct updates on SYS_NC columns are disallowed"
-- This is only needed in 11.1.0.6, fixed in 11.1.0.7, not applicable to 10.2 or lower
-- ALTER SYSTEM SET EVENTS '22838 TRACE NAME CONTEXT LEVEL 1,FOREVER';
-- then run Csalter.plb:
@?/rdbms/admin/csalter.plb
-- Csalter will ask for confirmation - do not copy/paste all the actions in one go
-- sample Csalter output:
-- 3 rows created.
...
-- This script will update the content of the Oracle Data Dictionary.
-- Please ensure you have a full backup before initiating this procedure.
-- Would you like to proceed (Y/N)?y
-- old 6: if (UPPER('&conf') <> 'Y') then
-- New 6: if (UPPER('y') <> 'Y') then
-- Checking data validility...
-- begin converting system objects
-- PL/SQL procedure successfully completed.
-- Alter the database character set...
-- CSALTER operation completed, please restart database
-- PL/SQL procedure successfully completed.
...
-- Procedure dropped.
-- if you are using spfile then you need to also
-- ALTER SYSTEM SET job_queue_processes=
-- ALTER SYSTEM SET aq_tm_processes=
SHUTDOWN IMMEDIATE;
-- You NEED to restart the database before doing ANY operation on this database
-- exit this session now, do not use the session where Csalter was run for other operations
EXIT
-- reconnect to the database and start the database
Sqlplus / as sysdba
STARTUP;
and the database will be AL32UTF8.
Note: in 10.1 csalter is asking for "Enter value for 1: ".
-- Would you like to proceed ?(Y/N)?Y
-> simply hit enter.
-- old 5: if (UPPER('&conf') <> 'Y') then
-- new 5: if (UPPER('Y') <> 'Y') then
-- Enter value for 1:
Go to 10.c).
10.c) Check the Csalter/alter database output and the alert.log for errors; some Csalter messages do NOT have an ORA- number.
10g and 11g: if you see messages in the Csalter output like "Unrecognized convertible data found in scanner result" or "Checking or Converting phrase did not finish successfully. No database (national) character set will be altered", then your LAST csscan was not "clean" and you missed some steps in this note.
If you see an error after the
Alter the database character set...
message in the alert.log, or during the alter database command (8i or 9i) / Csalter, like for example:
ORA-41400: Bind character set (31) does not match database character set (873)
Note: the bind character set may differ (= another number than 31), or another error like
ORA-00604: error occurred at recursive SQL level 1
or ORA-604 signalled during: ALTER DATABASE CHARACTER SET AL32UTF8...
then the NLS_CHARACTERSET is NOT (fully) changed after running Csalter / alter database. Check the actual state with the following select:
Sqlplus / as sysdba
select distinct(nls_charset_name(charsetid)) CHARACTERSET,
       decode(type#, 1, decode(charsetform, 1, 'VARCHAR2', 2, 'NVARCHAR2', 'UNKNOWN'),
                     9, decode(charsetform, 1, 'VARCHAR', 2, 'NCHAR VARYING', 'UNKNOWN'),
                    96, decode(charsetform, 1, 'CHAR', 2, 'NCHAR', 'UNKNOWN'),
                     8, decode(charsetform, 1, 'LONG', 'UNKNOWN'),
                   112, decode(charsetform, 1, 'CLOB', 2, 'NCLOB', 'UNKNOWN')) TYPES_USED_IN
from sys.col$
where charsetform in (1,2) and type# in (1, 8, 9, 96, 112)
order by CHARACTERSET, TYPES_USED_IN
/
This should give 7 rows, one for each datatype, and 2 charactersets: the NLS_CHARACTERSET and the NLS_NCHAR_CHARACTERSET.
-- example of correct output for a AL32UTF8 NLS_CHARACTERSET and AL16UTF16 NLS_NCHAR_CHARACTERSET database:
CHARACTERSET      TYPES_USED_IN
----------------- ---------------
AL32UTF8          CHAR
AL32UTF8          CLOB
AL32UTF8          LONG
AL32UTF8          VARCHAR2
AL16UTF16         NCHAR
AL16UTF16         NCLOB
AL16UTF16         NVARCHAR2
If this select returns more than 7 rows (for example 2 different character sets for VARCHAR2) then there is something wrong and you will have errors in the alert.log during the conversion.
11) (10g and 11g) Reload the data pump packages after a change to AL32UTF8 in 10g and up.
For 10g and up the datapump packages need to be reloaded after a conversion to AL32UTF8, see Note 430221.1 - How To Reload Datapump Utility EXPDP/IMPDP.
In some cases exp (the original export tool) fails in 10g after changing to AL32UTF8; please see Note 339938.1 Full Export From 10.2.0.1 Aborts With EXP-56 ORA-932 (Inconsistent Datatypes) EXP-0.
12) Import the exported data back into the database.
Note that if the Csscan done in point 4) gave ONLY "Changeless" and NO "Convertible" data (this is not often seen), then there is no data to import when using Csalter/Alter database.
12.a) When using Csalter/Alter database to go to AL32UTF8 and there was "Truncation" data in the csscan done in point 4:
Truncation data is always ALSO "Convertible"; it is "Convertible" data that needs action before you can import it again. If there was "Truncation" then typically this is handled by pre-creating the tables using CHAR semantics or an enlarged column size in bytes, after changing the database to AL32UTF8 using Csalter/Alter database and before starting the import.
Note that simply setting NLS_LENGTH_SEMANTICS=CHAR in the init.ora will NOT work to go to CHAR semantics.
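Instead, declare the semantics explicitly per column when pre-creating the tables. A minimal sketch (the table and columns are hypothetical):
CREATE TABLE scott.emp_pre
( ename VARCHAR2(10 CHAR),  -- explicit CHAR semantics, holds 10 characters regardless of byte expansion
  job   VARCHAR2(9 CHAR)
);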
Once the measures for solving the "Truncation" are in place, you can import the "Truncation/Convertible" data.
12.b) When using (Full) export/import to go to a new/other AL32UTF8 database and there was "Truncation" data in the csscan done in point 4:
Truncation data is always ALSO "Convertible"; it is "Convertible" data that needs action before you can import it again. If there was "Truncation" then typically this is handled by pre-creating the tables using CHAR semantics or an enlarged column size in bytes, after creating the new AL32UTF8 database and before starting the import.
Note that simply setting NLS_LENGTH_SEMANTICS=CHAR in the init.ora will NOT work to go to CHAR semantics.
Once the measures for solving the "Truncation" are in place, you can import the "Truncation/Convertible" data.
12.c) When using Csalter/Alter database to go to AL32UTF8 and there was NO "Truncation" data, only "Convertible" and "Changeless", in the csscan done in point 4:
Set the init.ora parameter BLANK_TRIMMING=TRUE to avoid the problem documented in Note 779526.1 CSSCAN does not detect data truncation for CHAR datatype - ORA-12899 when importing.
12.d) When using (full) export/import to go to a new/other AL32UTF8 database and there was NO "Truncation" data, only "Convertible" and "Changeless", in the csscan done in point 4:
Create a new AL32UTF8 database if needed and set the init.ora parameter BLANK_TRIMMING=TRUE to avoid the problem documented in Note 779526.1 CSSCAN does not detect data truncation for CHAR datatype - ORA-12899 when importing.
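A sketch of setting this parameter (assuming an spfile is in use; BLANK_TRIMMING is a static parameter, so a restart is needed):
ALTER SYSTEM SET blank_trimming=TRUE SCOPE=SPFILE;
-- restart the instance afterwards:
SHUTDOWN IMMEDIATE
STARTUP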
13) Check your data and final things:
Use a correctly configured client or Oracle SQL Developer and verify your data.
Once everything is verified, the Csscan user can be removed:
Sqlplus / as sysdba
SQL> drop user csmig cascade;
If you did not use CHAR semantics for all CHAR and VARCHAR2 columns and pl/sql variables it might be an idea to consider this.
Sqlplus / as sysdba
SELECT owner, object_name, object_type, status FROM dba_objects WHERE status ='INVALID'
/
If there are any invalid objects, run utlrp.sql:
SQL> @?/rdbms/admin/utlrp.sql
It is also highly recommended to collect new stats for your database after the characterset change.
Sqlplus / as sysdba
-- where x is the number of CPU cores
EXEC DBMS_STATS.GATHER_DATABASE_STATS(DEGREE=> 'x')
And make sure to have checked Note 788156.1 AL32UTF8 / UTF8 (Unicode) Database Character Set Implications.
References
NOTE:258904.1 - Solving Convertible or Lossy data in Data Dictionary objects reported by Csscan when changing the NLS_CHARACTERSET
NOTE:745809.1 - Installing and Configuring Csscan in 10g and 11g (Database Character Set Scanner)
NOTE:225912.1 - Changing Or Choosing the Database Character Set (NLS_CHARACTERSET)
NOTE:458122.1 - Installing and Configuring Csscan in 8i and 9i (Database Character Set Scanner)
NOTE:444701.1 - Csscan Output Explained
NOTE:788156.1 - AL32UTF8 / UTF8 (Unicode) Database Character Set Implications