Friday, May 26, 2017

Ready for some automatic big table caching?

A new feature in 12.1.0.2 is automatic big table caching.  This is pretty cool.  The key parameter for it is db_big_table_cache_percent_target, which more or less sets aside some of the buffer cache to be used for full table scans.  It’s not like the KEEP or RECYCLE pools, which have objects dedicated to them.  And it’s not really setting up a set of buffers that can’t be used for anything else.  The idea is that up to this percentage of the buffer cache could be used for big table full table scans.
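
Before playing with it, you can check where yours stands.  A minimal sanity check (the default is 0, which leaves the feature off):

rem current setting; 0 (the default) means automatic big table caching is off
show parameter db_big_table_cache_percent_target

rem the cache target and how many buffers are allocated to it right now
select bt_cache_target, memory_buf_alloc from v$bt_scan_cache;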

Now a little history on full table scans in Oracle.  Prior to 11g, full table scans (of any table) were always done in the buffer cache.  However, if the table was of significant size, Oracle wouldn’t even try to pull the entire table into the cache during the scan.  It would, in effect, cycle through a small set of buffers for the whole table.  In 11g Oracle started to move this work out of the SGA (System Global Area) and into the PGA (Program Global Area).  The basic idea was that these blocks were highly unlikely to be shared, so why even try; let’s just do that work in an area private to the user.  The same thing basically happened, the table was cycled through a set of buffers, just in the PGA space instead.
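
If you want to see which flavor of read a session is doing, a couple of statistics tell the story.  This is just a rough check of my own, nothing official:

rem physical reads direct are the PGA (direct path) reads,
rem physical reads cache are the ones that went thru the buffer cache
select sn.name, ms.value
from v$mystat ms, v$statname sn
where ms.statistic# = sn.statistic#
and sn.name in ('physical reads direct', 'physical reads cache');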

But what about tables that are full scanned often?  Even if the entire table couldn’t be cached, having a good portion of it in the cache would likely be faster.  So in a sorta-kinda way Oracle is going back to the old way of doing full scans, but with a twist.  The twist is deciding which tables to keep in the cache.  A new characteristic is now tracked for tables in the buffer cache: temperature.

Temperature is at the object level, not the block level.  Oracle uses the good old LRU (Least Recently Used) algorithm at the block level to decide when to age out blocks.  But with automatic big table caching things change.  Now when full scans happen on a table, the more of them that happen, the hotter the table becomes and the more likely it is to stay in the cache.

So here’s a simple test to show it in action.  The two scripts (ahwm.sql and bigtab_cnt.sql) run queries on two different tables that are both big enough to qualify as big tables for my instance.  Also, I have a script (hbtcache.sql) to show some stats; the code for that script is at the bottom of this post.
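
I’m not showing ahwm.sql and bigtab_cnt.sql here, and what follows is just a hypothetical stand-in for them; anything that full scans the tables would do the job (the FULL hint is just to make sure we really get a full scan):

rem ahwm.sql -- hypothetical stand-in, any full scan of the table will do
select /*+ full(t) */ count(*) from ahwm t;

rem bigtab_cnt.sql -- same idea against BIG_TAB
select /*+ full(t) */ count(*) from big_tab t;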

SQL>
SQL> alter system set db_big_table_cache_percent_target=40;

System altered.

SQL> @hbtcache
********************************************************
****** Over all automatic big table caching stats ******

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP
--------------- ------------ ---------------- ---------------
             40            0                0            1000

********************************************************
**** Object level automatic big table caching stats ****

no rows selected

SQL>
SQL>
SQL> set termout off
SQL> @ahwm
SQL> set termout on
SQL> @hbtcache
********************************************************
****** Over all automatic big table caching stats ******

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP
--------------- ------------ ---------------- ---------------
             40            1           43,787            1000

********************************************************
**** Object level automatic big table caching stats ****

  DATAOBJ# OBJECT_NAME                    POLICY      TEMPERATURE SIZE_IN_BLKS CACHED_IN_MEM
---------- ------------------------------ ---------- ------------ ------------ -------------
     73914 AHWM                           MEM_ONLY          1,000       43,787        43,787

SQL> set termout off
SQL> @bigtab_cnt
SQL> set termout on
SQL> @hbtcache
********************************************************
****** Over all automatic big table caching stats ******

BT_CACHE_TARGET OBJECT_COUNT MEMORY_BUF_ALLOC MIN_CACHED_TEMP
--------------- ------------ ---------------- ---------------
             40            2           86,416            1000

********************************************************
**** Object level automatic big table caching stats ****

  DATAOBJ# OBJECT_NAME                    POLICY      TEMPERATURE SIZE_IN_BLKS CACHED_IN_MEM
---------- ------------------------------ ---------- ------------ ------------ -------------
     73914 AHWM                           MEM_ONLY          1,000       43,787        43,787
     73488 BIG_TAB                        MEM_PART          1,000       43,161        42,629

SQL>

What we see is that at 40% of the cache it was able to put the entire AHWM table in the cache, but couldn’t fit all of BIG_TAB.  It did put in as much of it as it could, though; notice the MEM_PART policy and that CACHED_IN_MEM is a bit less than SIZE_IN_BLKS.  So later scans on AHWM will be entirely in the buffer cache (no physical reads), but scans on BIG_TAB will still do some physical reads.
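
A quick way to see what fraction of each big table made it into memory is a ratio right off v$bt_scan_obj_temps.  This is just a convenience query of my own, not anything from the docs:

rem percent of each tracked big table currently cached in memory
select dataobj#, policy, temperature,
       round(100 * cached_in_mem / size_in_blks, 1) pct_cached
from v$bt_scan_obj_temps
order by temperature;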

A couple of things.  It’s not really clear to me at this point how the temperature goes up.  Clearly the more times you access a table the hotter it gets, but what 1,000 means is not clear, and I suppose it’s not critical for us to know exactly how Oracle measures this.


Also, MIN_CACHED_TEMP doesn’t appear to be modifiable.  I suspect that we will be able to modify it at some point.


Overall this looks like a cool thing, or is it a hot thing?  Hummm…


Code for hbtcache.sql  

rem hbtcache.sql
rem MAY2017 RVD
rem shows stats on automatic big table caching
rem
column memory_buf_alloc format 999,999,999
column object_name      format a30 wrap
column temperature      format 999,999,999
column size_in_blks     format 999,999,999
column cached_in_mem    format 999,999,999
set lines 200
prompt ********************************************************
prompt ****** Over all automatic big table caching stats ******
select bt_cache_target, object_count, memory_buf_alloc, min_cached_temp
from v$bt_scan_cache
/

prompt ********************************************************
prompt **** Object level automatic big table caching stats ****
select dataobj#, object_name, policy, temperature, size_in_blks, cached_in_mem
from v$bt_scan_obj_temps btsot, dba_objects dbaobj
where btsot.dataobj#=dbaobj.data_object_id(+) -- dataobj# is a data object number, so join to data_object_id
order by temperature
/

Wednesday, May 24, 2017

It’s the end for sec_case_sensitive_logon



I’ve been using sec_case_sensitive_logon set to FALSE for quite a while.  I do this because many of the scripts I use have been written over the years and the passwords in them have not always been well managed, so to speak.  However, in 12.2 this has come to a hard stop.  Oracle has been saying this parameter is deprecated/unsupported, and it sure is now!

If you set sec_case_sensitive_logon to FALSE in 12.2, it effectively puts the instance into “restricted logins” mode; the only way to connect is with “AS SYSDBA”.  The weird thing is that the error you get doesn’t lead you to this at all.  With it set to FALSE I’d get this behavior:

SQL>
SQL> conn op/op
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL>

With it set to TRUE:

SQL>
SQL> conn op/op
Connected.
SQL>

It sure would have been nice for this to be mentioned somewhere.  I looked in the alert log: nothing there.  None of the trace files mentioned anything useful.  I searched OTN and everywhere else.  Fortunately, with some help from my buddies via Twitter, we were able to figure it out.  Thanks Jeremy!
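
One diagnostic that might have helped: as I understand it, case-insensitive logons hang off the old 10G password verifier, and DBA_USERS shows which verifier versions each account carries.  This is just a check of my own, not something Oracle points you to:

rem which password verifier versions each account carries;
rem without a 10G verifier an account can't log on case-insensitively anyway
select username, password_versions
from dba_users
order by username;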

Edit: 
For those of you who might have it in your SPFILE, to remove it from the file so you don't get the annoying "ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance" at startup:

 SQL> alter system reset sec_case_sensitive_logon scope=spfile;
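
And if you want to double check that it really is out of the SPFILE, something like this works (V$SPPARAMETER just reflects the SPFILE contents):

rem isspecified shows FALSE once the parameter is out of the spfile
select name, value, isspecified
from v$spparameter
where name = 'sec_case_sensitive_logon';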