Oracle SQL optimization-2(zt)

Posted by zhouwf0726 on 2019-05-27
Oracle SQL optimization (excerpted from SAP Note 766349)
17. What must be considered in terms of the index design?
For more information, see Note 912620.
18. What analysis tools are available at Oracle level?
For various reasons, it may be useful to perform an SQL analysis directly at database level (for example, if no R/3 functions are available in a Java-only environment). Therefore, useful Oracle functions for SQL analysis are described below:
  • Creating an SQL trace
You can use ORADEBUG (Note 613872) or DBMS_SYSTEM (Note 602667) to create an SQL trace of any session.
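As a sketch of the DBMS_SYSTEM variant (the SID, serial number, and user name below are only illustrative and must be taken from V$SESSION):

```sql
-- Identify the target session (values below are hypothetical)
SELECT SID, SERIAL#, USERNAME FROM V$SESSION WHERE USERNAME = 'SAPR3';

-- Activate the SQL trace for session SID 123, serial# 4567 (Note 602667)
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(123, 4567, TRUE);

-- ... reproduce the problem, then deactivate the trace:
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(123, 4567, FALSE);
```

The trace file is then written to the user dump destination of the traced session's server process.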
  • Displaying an Explain plan
With the following command, the execution plan is displayed for all subsequent SQL statements instead of executing them:
SET AUTOTRACE TRACEONLY EXPLAIN
You can deactivate this mode with:
SET AUTOTRACE OFF
If you want to create an Explain for a statement with bind variables, the above approach does not work. You can use the following option instead:
EXPLAIN PLAN FOR <SQL statement>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
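For example, using the LTAP access from question 22 (table, column, and bind variable names are only illustrative), a minimal sketch would be:

```sql
EXPLAIN PLAN FOR
  SELECT * FROM "LTAP" WHERE "LGNUM" = :A0 AND "PQUIT" = :A1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Because the statement is only explained and not executed, the bind variables do not need to be assigned values.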
  • Display block accesses of an SQL statement
Among other things, the following command counts the Disk Reads and Buffer Gets of the subsequent SQL statements (without the actual result of the query being returned):
SET AUTOTRACE TRACEONLY
You can deactivate this mode with:
SET AUTOTRACE OFF
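A short usage sketch (the table name is taken from the example in question 22 and is only illustrative):

```sql
SET AUTOTRACE TRACEONLY
-- The query is executed, but instead of the result set only the
-- execution plan and statistics (consistent gets, physical reads, ...)
-- are displayed
SELECT COUNT(*) FROM "LTAP" WHERE "PQUIT" = ' ';
SET AUTOTRACE OFF
```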
  • PLAN statistics (Oracle >=9i)
As of Oracle 9i, you can collect detailed PLAN statistics such as processed records, buffer gets, disk reads, or required time. Since collecting these extensive statistics significantly affects database performance, this option is normally deactivated. For individual tests, you can use SQL*Plus to activate and evaluate these PLAN statistics at session level. To do this, proceed as follows:
    • Activating the PLAN statistics:

      ALTER SESSION SET STATISTICS_LEVEL=ALL;
    • Executing the relevant SQL statements
    • Query from V$SQL_PLAN_STATISTICS_ALL

      SELECT
      SUBSTR(LPAD(' ', PS.DEPTH) || OPERATION || ' ' ||
      OPTIONS || DECODE(OBJECT_NAME, NULL, NULL, ' (' ||
      OBJECT_NAME || ')'), 1, 40) ACTION,
      PS.COST,
      PS.CARDINALITY,
      PS.LAST_OUTPUT_ROWS "ROWS_PROC",
      PS.LAST_CR_BUFFER_GETS + PS.LAST_CU_BUFFER_GETS BUFFER_GETS,
      PS.LAST_DISK_READS DISK_READS,
      PS.LAST_ELAPSED_TIME TIME
      FROM V$SQL_PLAN_STATISTICS_ALL PS, V$SQL S WHERE
      S.SQL_TEXT LIKE '<sql_text>%' AND
      PS.ADDRESS = S.ADDRESS
      ORDER BY PS.ID;
The columns COST and CARDINALITY contain the CBO estimates, while the columns ROWS_PROC, BUFFER_GETS, DISK_READS and TIME (in microseconds) display the actual statistics of the last execution.
    • Deactivating the PLAN statistics:

      ALTER SESSION SET STATISTICS_LEVEL=TYPICAL;
  • PLAN statistics (Oracle >= 10g)
As of Oracle 10g, you can also use the GATHER_PLAN_STATISTICS hint:
    • Execute the SQL statement using the GATHER_PLAN_STATISTICS hint:

      SELECT /*+ GATHER_PLAN_STATISTICS */ ...
    • Display the execution plan using the following command:

      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,
      NULL, 'ALLSTATS'));
The records and the time for each execution step are contained in the columns "A-Rows" and "A-Time".
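A minimal end-to-end sketch of the two steps above (the table name is taken from the example in question 22 and is only illustrative):

```sql
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*)
FROM "LTAP" WHERE "PQUIT" = ' ';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS'));
```

With the format option 'ALLSTATS LAST' instead of 'ALLSTATS', only the statistics of the last execution are displayed rather than the cumulative values.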
  • Sample data
In some situations, it is useful to get an impression of the data composition without having to scan the entire table or without having to rely on how representative the first records of a table are. You can use the following command to randomly select data records:

SELECT * FROM <table> SAMPLE (<percentage>);
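A sketch of the SAMPLE clause, assuming the LTAP table from question 22:

```sql
-- Roughly 1 % of the rows of LTAP, chosen at random
SELECT "LGNUM", "PQUIT" FROM "LTAP" SAMPLE (1);

-- SAMPLE BLOCK samples whole blocks instead of rows
-- (faster, but the sample is less uniform)
SELECT COUNT(*) FROM "LTAP" SAMPLE BLOCK (1);
```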
19. What information do the "Estimated Costs" and "Estimated Rows" provide for an SQL statement?
An EXPLAIN displays "Estimated Costs" and "Estimated Rows", which are simply the CBO's calculation results (refer to Note 7550631). Since these results are based on a number of assumptions (for example, evenly distributed column values and up-to-date statistics) and depend on the database parameter settings, the calculated costs and rows are useful only within a margin of error. High "Estimated Costs" and "Estimated Rows" are therefore neither a sufficient nor a necessary indication of an expensive SQL statement. Moreover, the calculated costs have no direct effect on performance - the deciding costs are always the actual ones, in the form of Buffer Gets, Disk Reads, and processed rows.
20. Is it possible to determine a buffer quality for individual tables?
The Oracle Buffer Quality (see Note 618868 for more information) is a global statistic of the database. Since individual SQL statements can strongly affect this global value (both positively and negatively), a table-specific buffer quality is desirable. You can use the following SQL statement (based on V$SEGMENT_STATISTICS) to determine the 20 tables that have the poorest buffer quality (this query only considers tables with at least 10,000 buffer gets):

SELECT * FROM
( SELECT
    S1.OBJECT_NAME OBJECT,
    S1.VALUE LOGICAL_READS,
    S2.VALUE PHYSICAL_READS,
    ROUND((S1.VALUE - S2.VALUE) / S1.VALUE * 100, 2) "QUALITY (%)"
  FROM
    V$SEGMENT_STATISTICS S1, V$SEGMENT_STATISTICS S2
  WHERE
    S1.OBJ# = S2.OBJ# AND
    S1.VALUE > 10000 AND
    S1.STATISTIC_NAME = 'logical reads' AND
    S2.STATISTIC_NAME = 'physical reads'
  ORDER BY 4 ASC
)
WHERE ROWNUM <= 20;
21. What happens when the Buffer Gets counter overflows?
For Oracle Release 9i or lower, if an SQL statement executes more than two billion Buffer Gets, the counter in the Shared Cursor Cache overflows into the negative range. For Oracle Release 10g, the limit is four billion; when this limit is reached, the count starts again at 0. In both cases the result is that a large number of Buffer Gets is no longer immediately recognizable. In addition, in many cases the Shared Cursor Cache analysis ignores statements with negative Buffer Gets values, so the statements that are most expensive in terms of Buffer Gets are not displayed at all. To recognize the affected SQL statements despite the overflow, you have the following options:
  • Check whether there are accesses with a negative number of Buffer Gets:

    SELECT * FROM V$SQL WHERE BUFFER_GETS < 0;
  • Check the plausibility of the data - if the system displays a number of Buffer Gets that is significantly lower than is possible, an overflow may be responsible.
  • When you use Oracle 10g, also consider the AWR data from DBA_HIST_SQLSTAT, which is aggregated per snapshot, since an overflow between two snapshots (generally taken at one-hour intervals) is improbable.
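A sketch of such an AWR query, in the style of the buffer quality query from question 20 (the top-20 cutoff is an arbitrary choice):

```sql
-- The 20 statements with the most Buffer Gets per snapshot interval
SELECT * FROM
( SELECT SNAP_ID, SQL_ID, BUFFER_GETS_DELTA
  FROM DBA_HIST_SQLSTAT
  ORDER BY BUFFER_GETS_DELTA DESC )
WHERE ROWNUM <= 20;
```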
22. Can I see a sample analysis?
The following SQL statement guarantees several thousand disk reads and buffer gets per execution, although hardly any entries are returned:

SELECT *
FROM "LTAP" T_00 , "LTAK" T_01
WHERE
( T_01 . "MANDT" = :A0 AND
T_01 . "LGNUM" = T_00 . "LGNUM" AND
T_01 . "TANUM" = T_00 . "TANUM" ) AND
T_00 . "MANDT" = :A1 AND
T_00 . "LGNUM" = :A2 AND
T_00 . "PQUIT" = :A3
The execution path contains a sort merge join:

SELECT STATEMENT ( Estimated Costs = 1,712 , Estimated #Rows = 52,660 )
  MERGE JOIN
    TABLE ACCESS BY INDEX ROWID LTAP
      INDEX RANGE SCAN LTAP~0
    SORT JOIN
      TABLE ACCESS BY INDEX ROWID LTAK
        INDEX RANGE SCAN LTAK~R
A nested loop join is most likely preferable to a sort merge join here. Since, leaving aside the unselective MANDT, the WHERE clause contains selection conditions only on LTAP columns (LGNUM and PQUIT), you can check whether LTAP is suitable as the driving table of a nested loop join. A suitable index already exists:

NONUNIQUE Index LTAP~M
Column Name    #Distinct
MANDT                  1
LGNUM                  7
PQUIT                  2
MATNR             24,280
As you can see in the #Distinct column, neither LGNUM nor PQUIT have many different values. However, this information does not yet allow any direct conclusions to be drawn on the selectivity of the conditions.
Even without an SQL trace, in this case you can already read the PQUIT value ' ' from the ABAP source code:
SELECT LTAP~LGNUM LTAP~TANUM LTAP~TAPOS
LTAP~VLTYP LTAP~NLTYP LTAK~BWLVS
LTAK~BDATU LTAK~BZEIT
INTO CORRESPONDING FIELDS OF TABLE IT_LTAP
FROM LTAP JOIN LTAK
ON LTAK~LGNUM = LTAP~LGNUM AND
LTAK~TANUM = LTAP~TANUM
WHERE LTAP~LGNUM = P_LGNUM
AND LTAP~PQUIT = ' '.
Now you can determine the exact value distribution for LGNUM and PQUIT using a GROUP BY:
SELECT LGNUM, PQUIT, COUNT(*) FROM LTAP GROUP BY LGNUM, PQUIT;
LGN P   COUNT(*)
--- - ----------
I01 X         10
I02            5
I02 X      33955
I03 X       3088
P01          164
P01 X      81941
S01            2
S01 X      67807
S02           10
S02 X       3201
W01           33
W01 X     139158
You can see that only very few entries match the condition PQUIT = ' ' that is used here. The condition is therefore very selective, even though PQUIT assumes only two distinct values. A nested loop join driven by LTAP is therefore far preferable to a sort merge join.
The following options are available to persuade Oracle to use a nested loop join:
  • Specification of a hint (see Note 772497)
  • Sufficiently increase the statistics value for the number of distinct values of PQUIT, so that access via the LTAP~M index becomes more attractive to Oracle (Note 724545).
One solution within the application is to split the join into two separate statements as described in Note 187935.
As an alternative, you can also use a SUBSTITUTE_LITERALS hint and histograms (refer to Note 811852).
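As a sketch of the hint variant (the exact hint set depends on the desired plan; this assumes that LTAP should drive a nested loop join via the LTAP~M index):

```sql
SELECT /*+ ORDERED USE_NL(T_01) INDEX(T_00 "LTAP~M") */ *
FROM "LTAP" T_00 , "LTAK" T_01
WHERE
( T_01 . "MANDT" = :A0 AND
T_01 . "LGNUM" = T_00 . "LGNUM" AND
T_01 . "TANUM" = T_00 . "TANUM" ) AND
T_00 . "MANDT" = :A1 AND
T_00 . "LGNUM" = :A2 AND
T_00 . "PQUIT" = :A3
```

ORDERED fixes the join order to the FROM-clause order (LTAP first), and USE_NL(T_01) requests a nested loop join into LTAK. In an ABAP Open SQL statement, such hints would be supplied via the %_HINTS ORACLE addition described in Note 772497.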
23. Where can I find further information on SQL optimization?
In addition to covering general performance topics, the final section of Note 618868 contains references to books, SAP training and SAP services that have the SQL optimization as a central topic.
Related link: http://blog.csdn.net/CompassButton/archive/2006/07/31/1004912.aspx

From the "ITPUB blog"; link: http://blog.itpub.net/756652/viewspace-242455/. Please credit the source when reposting; otherwise legal liability may be pursued.
