Oracle Database 12c on ZFS File System

Introduction:

Oracle Database supports multiple file systems for hosting data files. Oracle ASM is Oracle's recommended storage solution for data files, and an ASM instance is supported in both RAC and non-RAC environments. Even so, many organizations choose not to use ASM for their own reasons and continue to depend on third-party file systems.

Oracle ZFS (Zettabyte File System) is owned by Oracle, and on Solaris it is the recommended option for managing storage devices. In Oracle Solaris 11 the default file system for "root" is ZFS, and no other file system can be used for the "root" file system; UFS or another third-party file system can still be used for other mount points/storage devices. In Solaris 10 there is a choice between ZFS and UFS for both the root file system and other mount points/storage devices (HDDs).

In my observation, almost all Oracle Solaris 11 environments use ZFS for managing their storage devices. ZFS is a robust file system with many excellent features, such as compression, multiple block size support, and snapshots/clones. In this article we will look at the recommended way of using the ZFS file system for storing data files.

Oracle Database Considerations for ZFS File System:

A ZFS file system uses a 128K record size by default, while Oracle uses a default database block size of 8K. It has been observed that customers running Oracle Database on ZFS often keep the default record size (128K), which is not recommended and goes against Oracle best practices. For datafile file systems, the ZFS record size should match the database block size:

recordsize = db_block_size

ZFS file systems can be created with different record sizes, as we will see in detail in the demonstration below.
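Before creating any data sets, it is worth checking whether the two sizes line up. The sketch below is a minimal illustration of that check; in a real environment the inputs would come from `show parameter db_block_size` and `zfs get -H -o value recordsize <dataset>`, but here they are passed in as assumed argument values so the comparison logic can be shown on its own.

```shell
#!/bin/sh
# Sketch: compare the database block size with a ZFS record size.
# Inputs are illustrative values, not read from a live system.

# Normalize a size like "8K", "128K", "1M" or plain bytes ("8192") to bytes.
to_bytes() {
  case "$1" in
    *K|*k) echo $(( ${1%?} * 1024 )) ;;
    *M|*m) echo $(( ${1%?} * 1024 * 1024 )) ;;
    *)     echo "$1" ;;
  esac
}

# check_match <db_block_size> <zfs_recordsize>
check_match() {
  db=$(to_bytes "$1")
  rs=$(to_bytes "$2")
  if [ "$db" -eq "$rs" ]; then
    echo "MATCH: db_block_size=$1 recordsize=$2"
  else
    echo "MISMATCH: db_block_size=$1 recordsize=$2 (set recordsize=$1)"
  fi
}

check_match 8192 128K   # the ZFS default record size: a mismatch
check_match 8192 8K     # the recommended configuration
```

A tiny helper like this can be dropped into a post-provisioning check so a default 128K record size under a datafile mount point is caught before the database goes live.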

Demonstration:

Let's look at the database block size on a Solaris system where all default parameters have been used.

SQL> show parameter db_block_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_size                        integer     8192

SQL> select name, block_size from v$datafile;

NAME                                               BLOCK_SIZE
-------------------------------------------------- ----------
/u01/oradata/PRODDB/system01.dbf                    8192
/u01/oradata/PRODDB/rcat_ts01.dbf                   8192
/u01/oradata/PRODDB/sysaux01.dbf                    8192
/u01/oradata/PRODDB/undotbs01.dbf                   8192
/u01/oradata/PRODDB/users01.dbf                     8192



Let's check the ZFS record size of the file system hosting these database files:

root@solaris113:~# zfs get recordsize data_pool
NAME       PROPERTY    VALUE  SOURCE
data_pool  recordsize  128K   default
root@solaris113:~#

The record size of the file system is the 128K default, which is not correct per the Oracle documentation. Different block sizes at the DB and OS levels will work, but this configuration can cause performance issues in the long run, so it is highly recommended to use the same block size at both the database and OS levels.

Let's see how we can configure different record sizes for different types of database files.

The table below illustrates the recommended record sizes for the respective database files.

Type of Files                   Oracle Block Size Recommendation
------------------------------  --------------------------------
OLTP Datafiles                  8K
DSS Datafiles                   16K/32K
TEMP Files                      128K/1M
REDO, UNDO, Archive Logs        1M

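The table above can be captured as a small lookup helper, for example when scripting data set creation for a new database host. This is a sketch under my own naming; the class names (`oltp`, `dss`, `temp`, `redo`, ...) are illustrative labels, not Oracle terms, and where the table gives a range (16K/32K, 128K/1M) one value has been picked for the example.

```shell
#!/bin/sh
# Sketch: map a database file class to a recommended ZFS recordsize,
# following the table above. Class names are illustrative.

recommended_recordsize() {
  case "$1" in
    oltp) echo "8k"  ;;            # OLTP datafiles: match db_block_size (8K)
    dss)  echo "32k" ;;            # DSS datafiles: 16K/32K; 32K chosen here
    temp) echo "1m"  ;;            # TEMP files: 128K/1M; 1M chosen here
    redo|undo|arch) echo "1m" ;;   # log-style files, large sequential writes
    *)    echo "unknown"; return 1 ;;
  esac
}

# Such a helper could then drive dataset creation, e.g. (not executed here):
#   zfs create -o recordsize=$(recommended_recordsize oltp) \
#       -o mountpoint=/dbdata/datafiles oltp_data/datafiles
recommended_recordsize oltp
recommended_recordsize redo
```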
In this demonstration we will use two disks (LUNs): one for the OLTP/DSS datafiles and one for the REDO/UNDO/TEMP and archive log files, each with a different record size.

root@solaris113:~# echo | format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c1t0d0 <ATA-VBOX HARDDISK-1.0-50.00GB>
/pci@0,0/pci8086,2829@d/disk@0,0
1. c1t2d0 <ATA-VBOX HARDDISK-1.0-1.00GB>
/pci@0,0/pci8086,2829@d/disk@2,0
2. c1t3d0 <ATA-VBOX HARDDISK-1.0-10.00GB>
/pci@0,0/pci8086,2829@d/disk@3,0
3. c1t4d0 <ATA-VBOX HARDDISK-1.0-25.00GB>
/pci@0,0/pci8086,2829@d/disk@4,0
Specify disk (enter its number):
root@solaris113:~#

Disk3 - c1t3d0 will be used for OLTP/DSS datafiles

Disk4 - c1t4d0 will be used for REDO,UNDO,Archive Logs

- Creation of ZFS Data Pools

We will create two ZFS pools, "oltp_data" and "log_data".


root@solaris113:~# zpool create oltp_data c1t3d0
root@solaris113:~# zpool status oltp_data
pool: oltp_data
state: ONLINE
scan: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        oltp_data  ONLINE       0     0     0
          c1t3d0   ONLINE       0     0     0

errors: No known data errors
root@solaris113:~#



root@solaris113:~# zpool create log_data c1t4d0
root@solaris113:~# zpool status log_data
pool: log_data
state: ONLINE
scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        log_data  ONLINE       0     0     0
          c1t4d0  ONLINE       0     0     0

errors: No known data errors
root@solaris113:~#

Now check the record sizes of the respective pools.

root@solaris113:~# zfs get recordsize oltp_data
NAME       PROPERTY    VALUE  SOURCE
oltp_data  recordsize  128K   default
root@solaris113:~#

root@solaris113:~# zfs get recordsize log_data
NAME      PROPERTY    VALUE  SOURCE
log_data  recordsize  128K   default
root@solaris113:~#

Creation of ZFS data sets with different block sizes:

Create a ZFS data set with an 8K record size in "oltp_data":

root@solaris113:~# zfs create -o recordsize=8k -o mountpoint=/dbdata/datafiles oltp_data/datafiles

root@solaris113:~# zfs list | grep oltp
oltp_data             132K  9.78G   31K  /oltp_data
oltp_data/datafiles    31K  9.78G   31K  /dbdata/datafiles

root@solaris113:~# zfs get recordsize oltp_data/datafiles
NAME                 PROPERTY    VALUE  SOURCE
oltp_data/datafiles  recordsize  8K     local
root@solaris113:~#

Similarly, create data sets with a 1M record size in "log_data":

root@solaris113:~# zfs create -o recordsize=1m -o mountpoint=/logdata/redofiles log_data/redofiles
root@solaris113:~# zfs create -o recordsize=1m -o mountpoint=/logdata/temp log_data/temp
root@solaris113:~# zfs create -o recordsize=1m -o mountpoint=/logdata/undofiles log_data/undofiles
root@solaris113:~# zfs list | grep logdata
log_data/redofiles   31K  24.5G  31K  /logdata/redofiles
log_data/temp        31K  24.5G  31K  /logdata/temp
log_data/undofiles   31K  24.5G  31K  /logdata/undofiles

Get the record size for these data sets:

root@solaris113:~# zfs get recordsize log_data/redofiles
NAME                PROPERTY    VALUE  SOURCE
log_data/redofiles  recordsize  1M     local
root@solaris113:~# zfs get recordsize log_data/temp
NAME           PROPERTY    VALUE  SOURCE
log_data/temp  recordsize  1M     local
root@solaris113:~# zfs get recordsize log_data/undofiles
NAME                PROPERTY    VALUE  SOURCE
log_data/undofiles  recordsize  1M     local

Create the database with dbca, placing the datafiles, logfiles, and tempfiles in the locations corresponding to the recommended record sizes:

- Click on customized locations to change the location of the datafiles, controlfiles, and redo logfiles.

By default the datafiles point to the "ORACLE_BASE" location; we have to change the location of the undo and temp files to the 1M data sets.

- The /logdata/undofiles and /logdata/temp data sets use a 1M record size.

- Similarly, change the location of the redo logfiles.

- Finally, ensure all database files point to data sets whose record sizes follow the recommendations in the table above.

Verify the DB and OS block sizes after database creation:

SQL> @db_structue.sql

Control Files Location >>>>

Control Files
------------------------------------------------------------
/dbdata/datafiles/TESTDB/control01.ctl
/dbdata/datafiles/TESTDB/control02.ctl


Redo Log File Locations >>>> 


GROUP# Online REDO Logs
---------- --------------------------------------------------
3 /logdata/redofiles/TESTDB/redo03.log
2 /logdata/redofiles/TESTDB/redo02.log
1 /logdata/redofiles/TESTDB/redo01.log

Data Files Locations >>>>

   ID Database Data Files                                MBYTE Sta TSPACE
----- -------------------------------------------------- ----- --- ----------
    3 /dbdata/datafiles/TESTDB/sysaux01.dbf                670 OK  SYSAUX
    1 /dbdata/datafiles/TESTDB/system01.dbf                770 OK  SYSTEM
    4 /logdata/undofiles/TESTDB/undotbs01.dbf               55 OK  UNDOTBS1
    6 /dbdata/datafiles/TESTDB/users01.dbf                   5 OK  USERS
                                                         -----
Total                                                     1500

SQL>

We can verify the record sizes of all the data sets listed above using the command "zfs get recordsize <dataset-name>".
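Checking each data set one by one gets tedious, so the verification can be scripted. The sketch below parses the tab-separated form of that output and flags anything still at the 128K default; on the Solaris host the input would come live from `zfs get -H -o name,value recordsize <dataset> ...`, but here a sample of that output (with the values from this article) is embedded so the parsing can be shown without a ZFS system.

```shell
#!/bin/sh
# Sketch: verify several data sets at once by parsing "zfs get" output.
# sample_output stands in for:
#   zfs get -H -o name,value recordsize oltp_data/datafiles \
#       log_data/redofiles log_data/temp log_data/undofiles

sample_output() {
  printf '%s\t%s\n' \
    "oltp_data/datafiles" "8K" \
    "log_data/redofiles"  "1M" \
    "log_data/temp"       "1M" \
    "log_data/undofiles"  "1M"
}

# Warn about any data set whose recordsize is still the 128K default.
check_defaults() {
  sample_output | awk -F'\t' '
    $2 == "128K" { print "WARN: " $1 " still at default recordsize"; bad = 1 }
    END { exit bad }'
}

check_defaults && echo "all data sets tuned"
```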

Conclusion:

Oracle highly recommends matching the ZFS record size to the database block size (db_block_size) for better OLTP performance. It is also a best practice to keep the redo logfiles, archive logfiles, and tempfiles on a separate data set, which helps the database with I/O overheads. For redo logfiles, archive logfiles, and tempfiles a larger ZFS record size should be used; in this article's demonstration we used 1M, which performs reasonably well. Configuring the ZFS record size for a data set is very simple, yet in most customer environments I have seen, this block sizing has not been considered at all. I would highly recommend applying the suggested block sizing for better database I/O performance.

