
Installing Oracle 19c RAC Database



This post explains how to install Oracle's high-availability solution, Real Application Clusters (RAC), on the Oracle Linux 7.9 operating system using VMware Workstation 15.5. For a RAC installation you need two servers, two ethernet cards in each server, and shared disks visible to both servers. You may download the needed files here.



Installation has 6 phases:

• Installing the Operating System and Preparing the Servers

• Configuring the Disks with Oracleasm

• Installation Phase 1- Installing Grid

• Installation Phase 2- Installing Oracle Software

• Installation Phase 3- Adding Disk Group with asmca

• Installation Phase 4- Creating Database with DBCA



1- Installing the Operating System and Preparing the Servers


The operating system is installed as described in our post “Installing Oracle Linux 7.9 on Virtual Server (VMware 15.5)”.

I’ll continue by cloning the server. If you wish, you may continue with the server you’ve installed.


Right click on the server you’ve installed and click “Manage > Clone”.




Click “The current state in the virtual machine”.



Click “Create a full clone”.



Name the clone. In this installation, I named my servers “data4tech01” and “data4tech02”.



Click “Power on this virtual machine” and boot the server.



Once the server boots, update the operating system and install the packages needed for the installation.

# yum -y update
# yum install -y oracle-database-preinstall-19c.x86_64
# yum install -y oracleasm-support
# yum install -y dnsmasq*

Because I cloned my server, its hostname is the same as the source server’s. Change the hostname.

# vi /etc/hostname

Disable "Secure Linux".

# vi /etc/selinux/config
SELINUX=disabled

Stop and disable “Firewall” service.

# systemctl stop firewalld.service
# systemctl disable firewalld.service

We’ll add 2 ethernet cards to our server, so shut the server down first.

# shutdown -h now

Click "Edit virtual machine settings".



Click “Add”.



Select “Network Adapter” and click “Next”.



Add the second ethernet card in the same way, and set both new adapters to “host-only”.



Boot the server.

Ethernet cards are added to the server as “ens37” and “ens38”. Click “connect” for both of them.



Manage settings for “ens37”. This card will be used for “public network”.






Manage settings for “ens38”. This card will be used for “private network”.



Reboot the server.

# reboot

Upon booting the server, edit the file /etc/hosts.

# vi /etc/hosts
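A typical RAC hosts file lists each node’s public, private, and virtual (VIP) addresses plus the SCAN addresses (dnsmasq answers DNS queries from /etc/hosts, so the SCAN entries can live here too). The hostnames below match this guide; the IP addresses are examples from my host-only networks, so adjust them to your own subnets.

# Public
192.168.78.51    data4tech01.localdomain       data4tech01
192.168.78.52    data4tech02.localdomain       data4tech02
# Private
192.168.142.51   data4tech01-priv.localdomain  data4tech01-priv
192.168.142.52   data4tech02-priv.localdomain  data4tech02-priv
# Virtual (VIP)
192.168.78.53    data4tech01-vip.localdomain   data4tech01-vip
192.168.78.54    data4tech02-vip.localdomain   data4tech02-vip
# SCAN (three addresses for one name)
192.168.78.55    data4tech-scan.localdomain    data4tech-scan
192.168.78.56    data4tech-scan.localdomain    data4tech-scan
192.168.78.57    data4tech-scan.localdomain    data4tech-scan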

Since we’re setting up a small-scale test environment, we’ll use “dnsmasq” for name resolution. First, enable the “dnsmasq” service so it starts automatically when the server reboots.

# systemctl enable dnsmasq

Then, add the line below to the end of the file /etc/dnsmasq.conf.

# vi /etc/dnsmasq.conf
local=/localdomain/

On each server that will perform name resolution, point the /etc/resolv.conf file at the server where you’ve configured “dnsmasq”.
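The file only needs a search domain and a nameserver line. The address below is an example: the public IP I used for the first node, where dnsmasq runs. Use your own first node’s public IP.

# /etc/resolv.conf
search localdomain
nameserver 192.168.78.51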



Since we’ve configured “dnsmasq” on the first node, edit the file as in the picture, then lock it so it can’t be overwritten.

# chattr +i /etc/resolv.conf

Reboot the server.

# reboot

Check that the SCAN name resolves.

# nslookup data4tech-scan
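If dnsmasq is working, the lookup should return all three SCAN addresses. The output below is a sketch with example addresses; yours will match whatever you put in /etc/hosts.

Server:         192.168.78.51
Address:        192.168.78.51#53

Name:   data4tech-scan
Address: 192.168.78.55
Name:   data4tech-scan
Address: 192.168.78.56
Name:   data4tech-scan
Address: 192.168.78.57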


Network configuration is done for the first server; we’ll create the second server by cloning the first. Then we’ll add the disks that both servers can see. I’ve created a folder named “ORACLE SHARED FOLDER” in the directory where the virtual machines are installed, and I’ll create the disks there. You may create them wherever you want. I share the folder I’ve created so the first server can see it.



Click “Next” and continue.


Select the directory where you’ve created the folder.



Select “Enable this share”.



You’ve shared the folder. Now clone the machine you’ve configured, just as you did at the beginning. (We’ll clone data4tech01. I won’t repeat the steps, since they’re described above.)

Upon cloning, boot the server and change the hostname.

# vi /etc/hostname

Edit IP addresses of the server.

Manage the settings for "ens37".



Manage the settings for “ens38”.



Reboot the server.

# reboot

Check that the second server can also do name resolution.

# nslookup data4tech-scan

Check the communication between the servers.

In the 1st server:

# ping data4tech02
# ping data4tech02-priv

In the 2nd server:

# ping data4tech01
# ping data4tech01-priv

Add the disks that both servers can see to the folder you created earlier. Since I’ll create 2 disk groups, I’ll add two 20 GB disks. Shut down both servers.

# shutdown -h now

Click “Edit virtual machine settings”.



Click “Add” and then “Hard Disk”.



Select “SCSI”.



Create a new virtual disk and continue.



Click “Allocate all disk space now” and “Next”.



I create the first disk, named “OCR”, in the folder I created earlier.



We’ve created the first disk; now add one more disk named “DATA” in the same directory, in the same way.



We’ve added the disks to the first server. Now add the same disks to the second server. Click “Edit virtual machine settings”.



Upon clicking “Add”, select “Hard Disk”.



Select “SCSI”.



Click “Use an existing virtual disk”.



Choose the disk you’ve added before and continue by clicking “Finish”.



Do the same for the other disk too.


In RAC, both servers must be able to write to the disks. When you boot one of the servers, VMware will automatically lock the disks you’ve added and prevent the other server from booting. To avoid this, you need to make some changes in the directory where the virtual servers reside.

Open the files “Data4tech01.vmx” and “Data4tech02.vmx” (in the directory where the virtual servers are installed) with a text editor such as Notepad++ and add the lines below to the end.

disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
disk.enableuuid = "TRUE"
bios.bootdelay = "8000"

2- Configuring the Disks with Oracleasm


Check the disks you’ve added.

# ls -al /dev/sd*

/dev/sda

/dev/sda1

/dev/sda2

/dev/sdb

/dev/sdc

“sdb” and “sdc” are the disks we added. We’ll partition those disks with the “fdisk” command. (Do this on the 1st server only.)


Partitioning sdb:



Partitioning sdc:



You see the partitions created as “sdb1” and “sdc1”.

# ls /dev/sd*
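For reference, an fdisk session that creates a single primary partition spanning the whole disk looks roughly like this (answers shown after each prompt; repeat the same steps for /dev/sdc):

# fdisk /dev/sdb
Command (m for help): n          <- new partition
Select (default p): p            <- primary
Partition number (1-4, default 1): 1
First sector: <ENTER>            <- accept default
Last sector: <ENTER>             <- accept default, use the whole disk
Command (m for help): w          <- write the partition table and exit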

Configure Oracle ASM Library and stamp the disks for ASM.


Configure Oracleasm.

1st Server:

# oracleasm configure -i 

Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done


Check the parameters.

# oracleasm configure

# oracleasm init

Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm

# oracleasm createdisk OCR /dev/sdb1

Writing disk header: done

Instantiating disk: done

# oracleasm createdisk DATA /dev/sdc1

Writing disk header: done

Instantiating disk: done

# oracleasm scandisks
# oracleasm listdisks

2nd Server:

# oracleasm configure -i 

Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

# oracleasm init

Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm

# oracleasm scandisks
# oracleasm listdisks

3- Installation Phase 1- Installing Grid


Create the directory structure on both servers. To hold the installation files, I create a directory named “/u01/oraInstall”. If you wish, you can put the installation files somewhere else.

# mkdir -p /u01/app/19/grid
# mkdir -p /u01/app/oracle/product/19/db_1
# mkdir -p /u01/oraInstall
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01

Change the password of the Oracle user. (on both servers)

# passwd oracle

Switch to the Oracle user and create the profile files.

1st Server:

# su - oracle
$ vi .profile_crs
$ vi .profile_db
[oracle@data4tech01 ~]$ cat .profile_crs 

ORACLE_SID=+ASM1; export ORACLE_SID

GRID_HOME=/u01/app/19/grid; export GRID_HOME

ORACLE_HOME=$GRID_HOME; export ORACLE_HOME

PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin:/usr/sbin; export PATH


LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH


[oracle@data4tech01 ~]$ cat .profile_db

ORACLE_HOSTNAME=data4tech01.localdomain; export ORACLE_HOSTNAME

ORACLE_SID=orcl1; export ORACLE_SID

ORACLE_UNQNAME=orcl; export ORACLE_UNQNAME

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=/u01/app/oracle/product/19/db_1; export ORACLE_HOME

ORACLE_TERM=xterm; export ORACLE_TERM


TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR


PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin:/usr/sbin; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH


2nd Server:

# su - oracle
$ vi .profile_crs
$ vi .profile_db
[oracle@data4tech02 ~]$ cat .profile_crs

ORACLE_SID=+ASM2; export ORACLE_SID

GRID_HOME=/u01/app/19/grid; export GRID_HOME

ORACLE_HOME=$GRID_HOME; export ORACLE_HOME

PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin:/usr/sbin; export PATH


LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

[oracle@data4tech02 ~]$ cat .profile_db

ORACLE_HOSTNAME=data4tech02.localdomain; export ORACLE_HOSTNAME

ORACLE_SID=orcl2; export ORACLE_SID

ORACLE_UNQNAME=orcl; export ORACLE_UNQNAME

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=/u01/app/oracle/product/19/db_1; export ORACLE_HOME

ORACLE_TERM=xterm; export ORACLE_TERM


TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR


PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin:/usr/sbin; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH


Once the profiles are in place, enable and start the chrony service on both servers.

# systemctl enable chronyd
# systemctl restart chronyd

Move the installation files you’ve downloaded to the /u01/oraInstall in the 1st server.

# chown oracle.oinstall /u01/oraInstall/LINUX*
# chmod 777 /u01/oraInstall/LINUX*

Unzip with the Oracle user.

$ unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19/grid/

Install “cvuqdisk” with root user. (on both servers)

# rpm -Uvh cvuqdisk-1.0.10-1.rpm

Since we unzipped on the first server, the “cvuqdisk” rpm isn’t on the second server, so we copy it over with scp and install it there.

# scp cvuqdisk-1.0.10-1.rpm root@data4tech02:/tmp

Go to the /tmp directory on the 2nd server and install.

# cd /tmp
# rpm -Uvh cvuqdisk-1.0.10-1.rpm

Start the installation on the 1st server. Reboot it before starting, then log in as the Oracle user.

$ cd /u01/app/19/grid
$ ./gridSetup.sh

Start the installation by clicking “Configure Oracle Grid Infrastructure for a New Cluster”.


Click “Configure an Oracle Standalone Cluster”.



Choose the cluster and SCAN names. The cluster name shouldn’t be longer than 15 characters; otherwise you’ll encounter an error during installation. (Root.sh step 16)



Add the information of the 2nd server.



Test SSH connectivity; if there’s no connection, click “Setup”.



If there’s no SSH connectivity between the servers, enter the Oracle user’s password and let the installer set up the connection.



Define the network configuration. As mentioned earlier, “ens37” is the public network and “ens38” is the private network.


Click “Use Oracle Flex ASM for storage”.



Click “No”.



Choose the directory where you’ve defined the disks using “Change Discovery Path”. Set the Disk Group Name to OCR and select External redundancy.

High Redundancy: Data is stored as 3 copies. No data loss even if 2 failure groups crash.

Normal Redundancy: Data is stored as 2 copies. No data loss if one failure group crashes.

External Redundancy: Data is stored as a single copy; protection is left to the external storage. For production systems, it’s better to use at least normal redundancy disk groups.



Choose the passwords. You can use different passwords; I’ll use the same one for both.



Click “Do not use Intelligent Platform Management Interface (IPMI)”.



Click “Next” since we won’t use EM Cloud Control.



Adjust the groups.



Check that the Oracle Base directory is correct.



Click “Next”.



I’d suggest running the scripts manually, but since this is a test environment we’ll let the installer run them automatically.



The pre-checks found no real problem; the NTP warning can be ignored. Because we chose a 4 GB swap size when installing the server, we got a swap-size error; it should have been 8 GB. Click “Next” and start the installation.

To increase swap size, you may check our post “Increasing Swap Space in Linux Operating System".
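For reference, Oracle’s 19c guideline for swap is roughly: 1.5x RAM when RAM is up to 2 GB, equal to RAM between 2 and 16 GB, and 16 GB above that. A small sketch of that rule (the helper function name is mine, not an Oracle tool):

```shell
# Recommended swap (GB) for a given RAM size (GB), per Oracle's 19c guideline:
# up to 2 GB RAM -> 1.5x RAM; 2-16 GB -> same as RAM; above 16 GB -> 16 GB.
recommended_swap_gb() {
  ram=$1
  if [ "$ram" -le 2 ]; then
    echo $(( ram * 3 / 2 ))
  elif [ "$ram" -le 16 ]; then
    echo "$ram"
  else
    echo 16
  fi
}

recommended_swap_gb 8    # an 8 GB RAM server needs 8 GB of swap
```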



Click “Install” and start the installation.



It asks for permission to run the scripts automatically. Click “Yes” and allow it.



! You may encounter the “PRVG-13606” error at the end of the installation; it’s NTP-related and can be ignored.



This completes the Grid installation.


Source the .profile_crs you created, as the Oracle user.

$ . .profile_crs
$ crsctl stat res -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       data4tech01              STABLE
               ONLINE  ONLINE       data4tech02              STABLE
ora.chad
               ONLINE  ONLINE       data4tech01              STABLE
               ONLINE  ONLINE       data4tech02              STABLE
ora.net1.network
               ONLINE  ONLINE       data4tech01              STABLE
               ONLINE  ONLINE       data4tech02              STABLE
ora.ons
               ONLINE  ONLINE       data4tech01              STABLE
               ONLINE  ONLINE       data4tech02              STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       data4tech01              STABLE
      2        ONLINE  ONLINE       data4tech02              STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       data4tech01              STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       data4tech02              STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       data4tech02              STABLE
ora.OCR.dg(ora.asmgroup)
      1        OFFLINE OFFLINE                               STABLE
      2        ONLINE  ONLINE       data4tech02              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  OFFLINE      data4tech01              Instance Shutdown,STARTING
      2        ONLINE  ONLINE       data4tech02              Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       data4tech01              STABLE
      2        ONLINE  ONLINE       data4tech02              STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       data4tech02              STABLE
ora.data4tech01.vip
      1        ONLINE  ONLINE       data4tech01              STABLE
ora.data4tech02.vip
      1        ONLINE  ONLINE       data4tech02              STABLE
ora.qosmserver
      1        ONLINE  ONLINE       data4tech02              STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       data4tech01              STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       data4tech02              STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       data4tech02              STABLE


4- Installation Phase 2- Installing Oracle Software


Unzip with the Oracle user.

$ unzip LINUX.X64_193000_db_home.zip -d /u01/app/oracle/product/19/db_1

Start the installation.

$ cd /u01/app/oracle/product/19/db_1
$ ./runInstaller

Click “Software only”.



Click “Oracle Real Application Clusters database installation”.



Be sure that both servers are selected and click “Next”.



Click “Enterprise Edition”.



Be sure that the software location is the “/u01/app/oracle/product/19/db_1” directory.



Click “Next”.



Enter the root password so the scripts can run automatically.



Click “Ignore All” when the NTP warning appears again.



Click “Install”.



Approve to run the scripts automatically.



Finish the software installation by clicking “Close”.



5- Installation Phase 3- Adding Disk Group with asmca


We’ll use the asmca tool to create the disk group that will hold the database.

First, source .profile_crs as the Oracle user.

$ . .profile_crs
$ asmca

As you see, we have one disk group which we created during grid installation.



Right click “Disk Groups” and click “Create”.



I name the disk group that will hold the database “DATA”. You can name it whatever you want. Add the disk you configured with Oracleasm earlier and click “OK”.



We’ve created the disk group; you can view the disk groups as in the picture. Click “Exit” to leave the asmca tool.



6- Installation Phase 4- Creating Database with DBCA


Source .profile_db as the Oracle user.

$ . .profile_db
$ dbca

Click “Create a database”.


Click “Advanced Configuration”.


Click “General Purpose or Transaction Processing” and click “Next”.



Be sure that servers are selected and click “Next”.



Name the database and set the SID value; we chose ‘orcl’ for both. Leave “Create as Container database” unchecked and continue.



Select “+DATA”, the disk group we added.



Don’t enable “Archiving”. I’ll explain it later.



Since we won’t use “DB Vault” or “Label Security”, click “Next”.



We’ll continue with Automatic Shared Memory Management (ASMM); if you wish, you can use a different memory management mode to suit your system. What matters is that the value shouldn’t exceed two-thirds of the server’s physical RAM.




Adjust the character set.



Select “Dedicated server mode”.



You may activate the “HR” schema if you wish. Click “Next”.



Continue because we won’t use cloud control.



Choose the passwords for “SYS” and “SYSTEM”.



Select “Create Database”.



“Ignore” the swap size warning and continue.



Click “Finish” and start the installation.



Click “Close” and finish the installation.
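Once DBCA finishes, you can check from either node that both instances are running, for example with srvctl (the output below is a sketch; it will reflect your own node names):

$ . .profile_db
$ srvctl status database -d orcl
Instance orcl1 is running on node data4tech01
Instance orcl2 is running on node data4tech02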




This concludes my post “Installing Oracle 19c RAC Database”.


Hope to see you in new posts,

Take care.


©2021, Data4Tech 
