Oracle RAC OLR Internals

With 11gR2, Oracle introduced the Oracle Local Registry (OLR). The OLR is the OCR's local counterpart and a new feature introduced with Grid Infrastructure. The OLR file is located at grid_home/cdata/<hostname>.olr, and its location is recorded in /etc/oracle/olr.loc. Each node has its own copy of the file in the Grid Infrastructure software home. The OLR stores important security contexts used by the Oracle High Availability Services stack early in the start sequence of Clusterware. The information stored in the OLR is needed by the Oracle High Availability Services daemon (OHASD) to start; this includes data about GPnP wallets, Clusterware configuration, and version information. The information in the OLR and the Grid Plug and Play configuration file is needed to locate the voting disks. If they are stored in ASM, the discovery string in the GPnP profile is used by the Cluster Synchronization Services daemon to look them up.
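If you want to confirm where the OLR lives on your node and check its integrity, you can read olr.loc and run ocrcheck with the -local flag (run these as the Grid Infrastructure owner or root; the paths and version in the output will of course be specific to your installation):
cat /etc/oracle/olr.loc
ocrcheck -local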
In this post I'll try to describe the purpose of the OLR, why Oracle had to come up with this file, and what is stored in it.
To answer these questions we first need to look at what it contains, so let's take a dump of the OLR:
ocrdump -local -stdout
[SYSTEM.OHASD.RESOURCES.ora!DB11G!db][SYSTEM.OHASD.RESOURCES.ora!DB11G!db.CONFIG]ORATEXT: ACL=owner:grid:rwx,pgrp:asmdba:r-x,other::r--,group:oinstall:r-x,user:oracle:rwx~ACTION_FAILURE_TEMPLATE=
~ACTION_SCRIPT=~ACTIVE_PLACEMENT=1~AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX
~AUTO_START=restore~BASE_TYPE=ora.cluster_resource.type~CARDINALITY=1~CHECK_INTERVAL=1
~CHECK_TIMEOUT=30~CLUSTER_DATABASE=false~DATABASE_TYPE=SINGLE~DB_UNIQUE_NAME=DB11G
~DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) ELEMENT(DATABASE_TYPE=%DATABASE_TYPE%)~DEGREE=1~DESCRIPTION=Oracle Database resource~ENABLED=1
~FAILOVER_DELAY=0~FAILURE_INTERVAL=60~FAILURE_THRESHOLD=1~GEN_AUDIT_FILE_DEST=/oradata/DB11G/admin/adump~GEN_START_OPTIONS=open~GEN_USR_ORA_INST_NAME=DB11G
~HOSTING_MEMBERS=~INSTANCE_FAILOVER=1~LOAD=1~LOGGING_LEVEL=1~MANAGEMENT_POLICY=AUTOMATIC
~NAME=ora.DB11G.db~NLS_LANG=~NOT_RESTARTING_TEMPLATE=~OFFLINE_CHECK_INTERVAL=0
~ONLINE_RELOCATION_TIMEOUT=0~ORACLE_HOME=/opt/oracle/product/base/11.2.0.3~ORACLE_HOME_OLD=
~PLACEMENT=balanced~PROFILE_CHANGE_TEMPLATE=~RESTART_ATTEMPTS=2~ROLE=PRIMARY~SCRIPT_TIMEOUT=60
~SERVER_POOLS=~SPFILE=/opt/oracle/product/base/11.2.0.3/dbs/spfileDB11G.ora
~START_DEPENDENCIES=weak(type:ora.listener.type,uniform:ora.ons)hard(ora.DB11G_DATA_DG.dg,ora.DB11G_ARCH_DG.dg) pullup(ora.DB11G_DATA_DG.dg,ora.DB11G_ARCH_DG.dg)~START_TIMEOUT=600~STATE_CHANGE_TEMPLATE=
~STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DB11G_DATA_DG.dg,shutdown:ora.DB11G_ARCH_DG.dg)
~STOP_TIMEOUT=600~TYPE=ora.database.type~TYPE_ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--~TYPE_NAME=ora.database.type
~TYPE_VERSION=3.2~UPTIME_THRESHOLD=1h~USR_ORA_DB_NAME=~USR_ORA_DOMAIN=~USR_ORA_ENV=
~USR_ORA_FLAGS=~USR_ORA_INST_NAME=DB11G~USR_ORA_OPEN_MODE=open~USR_ORA_OPI=false
~USR_ORA_STOP_MODE=immediate~VERSION=11.2.0.3.0
I have tried to format the output as much as I can. The point here is to see that the OLR holds a lot of information: ORA_CRS_HOME, the Clusterware version and configuration, the local host version and active version, GPnP details, the time and location of the latest OCR backup, the node name, the status of the node's resources (which are to be started and which are not), and the start and stop dependencies of those resources.
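If you are interested in only part of the registry rather than the full dump, ocrdump also accepts a -keyname argument; the two keys below are examples taken from my dump, and yours may differ slightly by version and configuration:
ocrdump -local -stdout -keyname SYSTEM.version
ocrdump -local -stdout -keyname SYSTEM.OHASD.RESOURCES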
Start and stop dependencies are classified as weak (should be fulfilled) and hard (must be fulfilled).
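The same dependency attributes can also be read through crsctl instead of a raw dump; for example, printing the profile of the database resource from my dump (ora.DB11G.db is specific to this installation, so substitute your own resource name):
crsctl stat res ora.DB11G.db -p | grep -E 'START_DEPENDENCIES|STOP_DEPENDENCIES'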
Now let's look at the purpose of this file. We know that the OCR needs to be accessible to Clusterware so that it knows which resources to start on a node; but since 11gR2 Oracle gives you the option of storing the OCR in ASM, so how does Clusterware access this information while ASM (which is itself a resource of the node) is still down? This is where the OLR comes in. Because the OLR is a locally available file on the operating system, it has no such dependency and can be read by any process with the appropriate privileges.
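A quick way to see this in practice is to look at the GPnP profile (which carries the ASM discovery string) and at where the voting files were actually found; both tools ship with Grid Infrastructure, and the output will be specific to your cluster:
gpnptool get
crsctl query css votedisk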
The High Availability Services stack consists of daemons that communicate with their peers on the other nodes. As soon as the High Availability Services stack is up, the cluster node can join the cluster and use the shared components (e.g., the OCR). The startup sequence of the High Availability Services stack is stored partly in the Grid Plug and Play profile, but that sequence also depends on information stored in the OLR.
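To see this lower stack and the resources that OHASD itself manages (as opposed to those managed by CRSD), you can check the High Availability Services and list the -init resources:
crsctl check has
crsctl stat res -init -t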
The next question that comes to mind is: if we have the OLR, why do we still need the OCR? To answer this, we compared the keys of the OLR and the OCR.
Comparing the OCR with the OLR reveals that the OLR has far fewer keys; for example, ocrdump reported 704 different keys for the OCR vs. 526 keys for the OLR on our installation. If you then look at the keys themselves, you will notice that the majority of keys in the OLR deal with the OHASD process, whereas the majority of keys in the OCR deal with CRSD. This confirms that you need the OLR (along with the GPnP profile) to start the High Availability Services stack.
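One simple way to reproduce this comparison is to dump both registries to files and count the key headers, which ocrdump prints in square brackets (the file names below are arbitrary, and your counts will differ from mine):
ocrdump -local /tmp/olr_dump.txt
ocrdump /tmp/ocr_dump.txt
grep -c '^\[' /tmp/olr_dump.txt
grep -c '^\[' /tmp/ocr_dump.txt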
I hope the information above helps you understand the OLR: its purpose, its content, its usage, and why it was required. Please comment below if you would like more detail, for example the complete dump of the OLR, a description of all the components and keys, how OHASD manages and maintains the OLR on each node, how the content is updated when you alter a node's Clusterware configuration, what happens when the OLR is lost or corrupted, how the OLR is initialized, and so on.
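Since the OLR is node-local and not mirrored, it is also worth knowing how to back it up and restore it before it is ever lost or corrupted; a minimal sketch, run as root, where <backup_file_name> stands for whatever file the manual backup produced:
ocrconfig -local -manualbackup
ocrconfig -local -restore <backup_file_name>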

Comments

  1. How frequently is the OLR updated? Is it always in sync with the OCR?
