Saturday, October 4, 2008

Oracle SQL Tuning TIPS

In this post I am going to cover some basic rules and tips that can improve query performance by reducing parsing, execution time, or both. These rules are simple but can yield substantial benefits.

TIP1.

1. Use SQL standards within an application. The simple rules below are easy to implement and allow more sharing within Oracle's shared pool.
a. Using a single case for all SQL verbs
b. Beginning all SQL verbs on a new line
c. Right or left aligning verbs within the initial SQL verb
d. Separating all words with a single space

2. Use bind variables wherever possible.
3. Use a standard approach to table aliases. If two identical SQL statements vary because an identical table has two different aliases, then the SQL is different and will not be shared.
4. Use table aliases and prefix all column names by their aliases when more than one table is involved in a query. This reduces parse time AND prevents future syntax errors if someone adds a column to one of the tables with the same name as a column in another table. (ORA-00918: COLUMN AMBIGUOUSLY DEFINED)
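Rule 2 above can be illustrated as follows (a sketch; the emp table and the :emp_id bind are illustrative):

-- Each distinct literal forces a separate hard parse:
SELECT emp_name FROM emp WHERE emp_id = 1001;
SELECT emp_name FROM emp WHERE emp_id = 1002;

-- Parsed once and shared for every value of the bind:
SELECT emp_name FROM emp WHERE emp_id = :emp_id;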

TIP2.

Don't apply functions or operations to indexed columns referenced in the WHERE clause; Oracle will then ignore any indexes defined on those columns. For reference, see the DO NOT USE / USE examples below:

DO NOT USE
SELECT account_name, trans_date, amount
FROM transaction
WHERE SUBSTR(account_name,1,7) = 'CAPITAL';

SELECT account_name, trans_date, amount
FROM transaction
WHERE account_name = NVL ( :acc_name, account_name);

SELECT account_name, trans_date, amount
FROM transaction
WHERE TRUNC (trans_date) = TRUNC (SYSDATE);

SELECT account_name, trans_date, amount
FROM transaction
WHERE account_name || account_type = 'AMEXA';

SELECT account_name, trans_date, amount
FROM transaction
WHERE amount + 3000 < 5000;

SELECT account_name, trans_date, amount
FROM transaction
WHERE amount != 0;


Instead use the below one:
USE
SELECT account_name, trans_date, amount
FROM transaction
WHERE account_name LIKE 'CAPITAL%';

SELECT account_name, trans_date, amount
FROM transaction
WHERE account_name LIKE NVL ( :acc_name, '%');

SELECT account_name, trans_date, amount
FROM transaction
WHERE trans_date BETWEEN TRUNC (SYSDATE) AND TRUNC (SYSDATE) + .99999;

SELECT account_name, trans_date, amount
FROM transaction
WHERE account_name = 'AMEX' AND account_type = 'A';

SELECT account_name, trans_date, amount
FROM transaction
WHERE amount < 2000;

SELECT account_name, trans_date, amount
FROM transaction
WHERE amount > 0;

TIP3.

Avoid using the HAVING clause in SELECT statements when a WHERE clause will do. HAVING filters rows only after all rows have been fetched, grouped, and summed, so moving restrictions into the WHERE clause reduces that overhead. HAVING should be used only to restrict columns that have summary operations applied to them. For reference, see the example below.

DO NOT USE
SELECT region, AVG (loc_size)
FROM location
GROUP BY region
HAVING region != 'SYDNEY'
AND region != 'PERTH';

USE
SELECT region, AVG (loc_size)
FROM location
WHERE region != 'SYDNEY'
AND region != 'PERTH'
GROUP BY region;


TIP4.

Minimize the number of table lookups (subquery blocks) in queries, particularly if statements include subquery SELECTs or multicolumn UPDATEs. Avoid using subqueries when a JOIN will do the job.

Separate Subqueries
SELECT emp_name
FROM emp
WHERE emp_cat = (SELECT MAX (category) FROM emp_categories)
AND emp_range = (SELECT MAX (sal_range)FROM emp_categories)
AND emp_dept = 0020;

Combined Subqueries
SELECT emp_name
FROM emp
WHERE (emp_cat, sal_range) =
      (SELECT MAX (category), MAX (sal_range)
       FROM emp_categories)
AND emp_dept = 0020;

TIP5.

When joining tables, consider the alternatives EXISTS, IN, and plain joins. None of these is consistently faster; it depends on the volume of data.
If the outer query is "big" and the inner query is "small", IN is generally more efficient. e.g.

select count(subobject_name)
from big
where object_id in ( select object_id from small );

versus:
select count(subobject_name)
from big
where exists ( select null from small where small.object_id = big.object_id );

If the outer query is "small" and the inner query is "big", WHERE EXISTS can be quite efficient. e.g.

select count(subobject_name)
from small
where object_id in ( select object_id from big );

versus:
select count(subobject_name)
from small
where exists ( select null from big where small.object_id = big.object_id );

TIP6.

Avoid joins that require the DISTINCT qualifier on the SELECT list in queries used to determine information at the owner end of a one-to-many relationship. The DISTINCT operator causes Oracle to fetch all rows satisfying the join and then sort and filter out duplicate values. EXISTS is a faster alternative, because the optimizer realizes that once the subquery has been satisfied, there is no need to proceed further and the next candidate row can be fetched.

DO NOT USE
SELECT DISTINCT d.dept_no, d.dept_name
FROM dept d, emp e
WHERE d.dept_no = e.dept_no;

USE
SELECT dept_no, dept_name
FROM dept d
WHERE EXISTS (SELECT 'X'
FROM emp e
WHERE e.dept_no = d.dept_no);

TIP7.

If possible, use UNION ALL instead of UNION. The UNION clause forces all rows returned by each portion of the UNION to be sorted and merged, and duplicates to be filtered, before the first row is returned. UNION ALL simply returns all rows, including duplicates, and does not have to perform any sort, merge, or filter. If your tables are mutually exclusive (contain no duplicate records), or you don't care whether duplicates are returned, UNION ALL is much more efficient.
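For example (a sketch; the acct_1 and acct_2 tables are illustrative):

DO NOT USE
SELECT account_id, amount FROM acct_1
UNION
SELECT account_id, amount FROM acct_2;

USE
SELECT account_id, amount FROM acct_1
UNION ALL
SELECT account_id, amount FROM acct_2;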

TIP8.

Consider using DECODE to avoid having to scan the same rows repetitively or join the same table repetitively.

DO NOT USE
SELECT COUNT(*)
FROM emp
WHERE status = 'Y'
AND emp_name LIKE 'SMITH%';
--
SELECT COUNT(*)
FROM emp
WHERE status = 'N'
AND emp_name LIKE 'SMITH%';


USE
SELECT COUNT(DECODE(status, 'Y', 'X', NULL)) Y_count,
COUNT(DECODE(status, 'N', 'X', NULL)) N_count
FROM emp
WHERE emp_name LIKE 'SMITH%';

TIP9.

If a query returns more than roughly 20 percent of the rows in a table, a full-table scan is usually cheaper than an index scan.
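If the optimizer picks an index anyway, a full scan can be forced with a hint (a sketch reusing the transaction table from TIP2):

SELECT /*+ FULL(t) */ account_name, trans_date, amount
FROM transaction t
WHERE trans_date > SYSDATE - 365;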

TIP10.

Never mix data types in Oracle queries, as the implicit conversion can invalidate the index. If the column is numeric, do not use quotes (e.g., salary = 50000). For character index columns, always use single quotes (e.g., name = 'NAME').
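For example, if emp_no is an indexed VARCHAR2 column, comparing it to a number makes Oracle rewrite the predicate as TO_NUMBER(emp_no) = 12345, which disables the index (the table and column names are illustrative):

DO NOT USE
SELECT emp_name
FROM emp
WHERE emp_no = 12345;

USE
SELECT emp_name
FROM emp
WHERE emp_no = '12345';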

TIP11.

To turn off an index (only with a cost-based optimizer), concatenate a null string to the index column name (e.g., name || '') or add zero to a numeric column name (e.g., salary + 0). With the rule-based optimizer, this allows you to manually choose the most selective index to service the query.
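For example, suppressing the index on account_name so the optimizer considers other access paths (a sketch):

SELECT account_name, trans_date, amount
FROM transaction
WHERE account_name || '' = 'CAPITAL';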

TIP12.

Whenever possible, use the UNION statement instead of OR conditions.
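For example, an OR across two differently indexed columns can prevent efficient index use; splitting the query lets each branch use its own index (a sketch; assumes separate indexes on account_name and account_type):

DO NOT USE
SELECT account_name, trans_date, amount
FROM transaction
WHERE account_name = 'AMEX'
OR account_type = 'A';

USE
SELECT account_name, trans_date, amount
FROM transaction
WHERE account_name = 'AMEX'
UNION
SELECT account_name, trans_date, amount
FROM transaction
WHERE account_type = 'A';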

TIP13.

Rewrite complex subqueries with temporary tables - Oracle created the global temporary table (GTT) and the SQL WITH clause to help divide-and-conquer complex SQL subqueries (especially those with WHERE clause subqueries, SELECT clause scalar subqueries, and FROM clause in-line views). Tuning SQL with temporary tables (and materializations in the WITH clause) can result in dramatic performance improvements.
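For example, the WITH clause can materialize a subquery once instead of repeating it (a sketch; table and column names are illustrative):

WITH dept_avg AS
  (SELECT dept_no, AVG (sal) avg_sal
   FROM emp
   GROUP BY dept_no)
SELECT e.emp_name, e.sal, d.avg_sal
FROM emp e, dept_avg d
WHERE e.dept_no = d.dept_no
AND e.sal > d.avg_sal;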

TIP14.

Use MINUS instead of NOT IN and NOT EXISTS subqueries - using the MINUS set operator in place of NOT IN or NOT EXISTS can result in a faster execution plan.
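For example, finding books with no sales as a set difference (a sketch using the book and sales tables):

SELECT book_key FROM book
MINUS
SELECT book_key FROM sales;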

TIP15.

Use SQL analytic functions - The Oracle analytic functions can do multiple aggregations (e.g. rollup by cube) with a single pass through the tables, making them very fast for reporting SQL.
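For example, an analytic SUM computes a per-account running total in a single pass over the table (a sketch reusing the transaction table):

SELECT account_name, trans_date, amount,
       SUM (amount) OVER (PARTITION BY account_name
                          ORDER BY trans_date) running_total
FROM transaction;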

TIP16.

Re-write NOT IN and NOT EXISTS subqueries as outer joins - in many cases, a NOT query with a non-correlated subquery can be re-written as an outer join with an IS NULL test. Note that the results are identical only when the subquery column contains no NULLs (a single NULL makes NOT IN return no rows at all). e.g.

SELECT book_key
FROM book
WHERE book_key NOT IN (SELECT book_key FROM sales);
-
SELECT book_key
FROM book b
WHERE NOT EXISTS (SELECT 1 FROM sales s WHERE s.book_key = b.book_key);

Both of the above can be re-written as:
SELECT b.book_key
FROM book b, sales s
WHERE b.book_key = s.book_key(+) AND s.book_key IS NULL;

TIP17.

Index NULL values - if we have SQL that frequently tests for NULL, consider creating an index on NULL values. Standard B-tree indexes do not include rows whose indexed columns are all NULL, so a query such as WHERE emp_name IS NULL cannot use them. To get around this, we can create a function-based index (FBI) that uses NVL to map NULLs to an indexable value. e.g.

--create an FBI on ename column with NULL values
create index emp_null_ename_idx on emp (nvl(ename,'null'));

analyze index emp_null_ename_idx compute statistics;

The same technique works with NULL numeric values. This syntax replaces NULL values with zero:

--create an FBI on emp_nbr column with NULL values
create index emp_null_emp_nbr_idx on emp (nvl(emp_nbr,0));

analyze index emp_null_emp_nbr_idx compute statistics;

Now we can use the index and greatly improve the speed of any queries that require access to the NULL columns. Note that we must make one of two changes:

1- Add a hint to force the index
2- Change the WHERE predicate to match the function

Here is an example of using an index on NULL column values:

-- test the index access (change predicate to use FBI)
select /*+ index(e emp_null_ename_idx) */ ename
from emp e
where nvl(ename,'null') = 'null';

TIP18.

Achieve faster SQL performance with dbms_stats. To choose the best execution plan for a SQL query, Oracle relies on statistics about the tables and indexes in the query. The execution plan determines which index to use to retrieve rows, the order in which to join multiple tables, and which internal join method to use (Oracle has nested loop, hash, star, and sort-merge join methods). These plans are computed by Oracle's cost-based SQL optimizer, commonly known as the CBO.
Use the dbms_stats package to gather the statistics.

Syntax:

exec dbms_stats.gather_database_stats;
exec dbms_stats.gather_database_stats(estimate_percent => 15);

exec dbms_stats.gather_schema_stats('SCOTT');
exec dbms_stats.gather_schema_stats('SCOTT', estimate_percent => 15);

exec dbms_stats.gather_table_stats('SCOTT', 'EMPLOYEES');
exec dbms_stats.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15);

exec dbms_stats.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
exec dbms_stats.gather_index_stats('SCOTT', 'EMPLOYEES_PK', estimate_percent => 15);

exec dbms_stats.gather_schema_stats( -
ownname => 'SCOTT', -
options => 'GATHER AUTO', -
estimate_percent => dbms_stats.auto_sample_size, -
method_opt => 'for all columns size repeat', -
cascade => true, -
degree => 15 -
);
