Why DB_BIG_TABLE_CACHE_PERCENT_TARGET was a good solution

DB_BIG_TABLE_CACHE_PERCENT_TARGET

While preparing for my Oracle Database 12c Certified Master exam, I learned about the DB_BIG_TABLE_CACHE_PERCENT_TARGET parameter, which is new in 12c. This parameter lets you reserve a certain percentage of the buffer cache for storing large tables. Full table scans on large tables are normally NOT kept in the buffer cache (they use direct path reads, bypassing the buffer cache altogether), the reasoning being that you should not keep scanning large tables over and over again.
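
Before changing anything, it is worth checking the current value and the state of the big table cache section. A minimal sketch (the parameter and view names are the real 12c ones, the queries themselves are just an illustration):

    -- current value of the parameter (defaults to 0, i.e. feature disabled)
    SELECT name, value
    FROM   v$parameter
    WHERE  name = 'db_big_table_cache_percent_target';

    -- overall state of the big table cache section of the buffer cache
    SELECT * FROM v$bt_scan_cache;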

JOIN me

My customer had one of those queries that are hard to tune. It was a join of 4 large tables (8 GB and 3 x 2 GB) with varying where conditions, depending on the search criteria entered in the application. SQL Plan Management and the like were out of scope because each query was built dynamically and differed from the others (so no fixed sql_id). The problem was that each of these tables returned a lot of rows, and only after joining with the other table(s) and applying all the where conditions there as well were most rows eliminated.
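
To give an idea of the shape of these statements, here is a purely hypothetical sketch (the real queries were generated by the application; all table and column names below are invented):

    SELECT a.id, b.region, c.created, d.amount
    FROM   big_tab_a a
    JOIN   big_tab_b b ON b.a_id = a.id
    JOIN   big_tab_c c ON c.a_id = a.id
    JOIN   big_tab_d d ON d.b_id = b.id
    WHERE  a.status   = :status      -- this block of predicates changes with
    AND    b.region   = :region      -- every search, so each execution gets
    AND    c.created >= :from_date;  -- its own sql_id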

This makes HASH JOIN the best way to join these tables together, but it also means full table scans on all 4 tables (nested loops are not a good fit when your first table returns a lot of rows for which you then have to visit the other tables). Another solution could be to build a materialized view on the join of these tables, but in my case the queries require real-time data and the base tables change a lot during the day. So a materialized view with refresh on commit was not an option; the constant maintenance of the materialized view would slow down the changes too much.
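
For completeness, this is roughly what the rejected construct would look like, simplified to two of the four tables (a hypothetical sketch on the invented names above; fast refresh on commit of a join view needs materialized view logs and the rowids of all base tables, which is exactly the per-commit overhead we wanted to avoid):

    CREATE MATERIALIZED VIEW LOG ON big_tab_a WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON big_tab_b WITH ROWID;

    CREATE MATERIALIZED VIEW mv_big_join
      REFRESH FAST ON COMMIT
    AS
      SELECT a.rowid AS a_rid, b.rowid AS b_rid,   -- rowids required for fast refresh
             a.id, a.status, b.region
      FROM   big_tab_a a, big_tab_b b
      WHERE  b.a_id = a.id;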

Oh yes, the problem query I was talking about was a central piece of the application, executed very often, and the customer wanted instant response. Hard to achieve on that 8+2+2+2 GB dataset (a bad idea, developers should think about this in advance…).

Setting the parameter

After discussing it with the local DBAs, it turned out that the buffer cache could be increased from 20 GB to 35 GB. So we did, and I configured DB_BIG_TABLE_CACHE_PERCENT_TARGET to 50 (%). Half of the buffer cache was now available for large tables and the other half behaved as before. The ‘other’ activity in the database still had about 17.5 GB of buffer cache, just as before, so our change should not have any negative impact.
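
The change itself boils down to two statements. A minimal sketch (illustrative only: how you grow the buffer cache depends on whether the SGA is sized manually or via ASMM/AMM on your system):

    -- grow the buffer cache (with ASMM/AMM you would resize sga_target / memory_target instead)
    ALTER SYSTEM SET db_cache_size = 35G SCOPE = BOTH;

    -- reserve 50% of the buffer cache for large table scans
    ALTER SYSTEM SET db_big_table_cache_percent_target = 50 SCOPE = BOTH;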

My problem query, after running it once, indeed stored my 4 large tables completely in the reserved part of the buffer cache. The first execution (disk reads) took 58 seconds, the second execution (no disk reads, all data came from the cache) took 18 seconds (yes, there were still millions of rows to join together, so some CPU load was expected).
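
One way to verify where the time goes between the two runs is to compare the cursor statistics (a sketch; the sql_id is a placeholder you would look up in V$SQL first):

    SELECT sql_id, executions, disk_reads, buffer_gets,
           ROUND(elapsed_time / 1000000) AS elapsed_seconds
    FROM   v$sql
    WHERE  sql_id = '&sql_id';   -- placeholder for the sql_id of the problem query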

Considering that these tables are constantly queried by similar but varying queries, it was a good solution to keep all this data in memory. The view V$BT_SCAN_OBJ_TEMPS is useful for monitoring this behaviour: each table has a ‘temperature’, which is a measure of how often it is accessed by queries. The big table cache uses a different algorithm than the traditional buffer cache to decide which tables to keep in memory (i.e. when you have more large tables than reserved space in the buffer cache). This is what it looks like:

[Screenshot: V$BT_SCAN_OBJ_TEMPS output showing the cached tables with their temperatures]
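
You can produce a similar overview with a query along these lines (a sketch; the join to DBA_OBJECTS is my addition to show object names, the V$BT_SCAN_OBJ_TEMPS columns are the documented 12c ones):

    SELECT o.owner, o.object_name,
           t.size_in_blks, t.temperature, t.policy, t.cached_in_mem
    FROM   v$bt_scan_obj_temps t
    JOIN   dba_objects o ON o.data_object_id = t.dataobj#
    ORDER  BY t.temperature DESC;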

Final thoughts

Usually it's a bad idea to solve performance problems by just ‘buying more memory’. You should first analyze the data model, rewrite the query or try to use other features to attack the root cause of the bad performance. In my case, however, it was a third-party application with auto-generated queries, so I didn't have many options. And since the tables are queried very often, it makes sense to cache them in memory.
