
How to deploy Oracle 12c Release 1 on RHEL6/Oracle Linux 6

Revision 11 posted to Oracle Solutions by DELL-Wes V on 3/11/2014 8:07:25 PM

Dell PowerEdge Systems Oracle 12c Database on Enterprise Linux 6 x86_64
Getting Started Guide

  

Notes and Cautions

NOTE: A NOTE indicates important information that helps you make better use of
your computer.

CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.


Overview


This document applies to Oracle Database 12c running on Red Hat Enterprise Linux 6.x AS x86_64 or Oracle Linux 6.x AS x86_64 (RHEL compatible kernel).

***The deployment tars discussed in this deployment guide will be made available by October 31st, 2013.***

[top]

Software and Hardware Requirements


Hardware Requirements

  • Oracle requires at least 4 gigabytes (GB) of physical memory.
  • Swap space is proportional to the amount of RAM allocated to the system.

RAM                       Swap Space
Between 1 GB and 2 GB     1.5 times the size of RAM
Between 2 GB and 16 GB    Equal to the size of RAM
More than 16 GB           16 GB

 

  • Oracle's temporary space (/tmp) must be at least 1 GB in size.
  • A monitor that supports a resolution of 1024 x 768 is required to correctly display the Oracle Universal Installer (OUI).
  • For Dell supported hardware configurations, see the Software Deliverable List (SDL) Tested & Validated Matrix for each Dell Validated Component at Dell's Current Release Page. A quick shell check for these prerequisites follows this list.
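
As referenced above, a quick sketch for verifying the memory, swap, and /tmp prerequisites from a shell (thresholds per the table and bullets above):

  free -m       # physical RAM and swap totals, in MB
  swapon -s     # active swap devices
  df -h /tmp    # /tmp must have at least 1 GB free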

The following table describes the disk space required for an Oracle installation:

Table 1-1. Minimum Disk Space Requirements

Software Installation Location    Size Required
Grid Infrastructure home          At least 8 GB
Oracle Database home              5.8 GB
Shared storage disk space         Sizes of database and Flashback Recovery Area

[top]

Network Requirements

  • It is recommended that each node contain at least three network interface cards (NICs): one NIC for the public network, one NIC for the ASM network, and two NICs for the private network to ensure high availability of the Oracle RAC cluster. If you are going to use ASM in the cluster, you need at least one Oracle ASM network. The ASM network can share the network interface with a private network.
  • Public, private and ASM interface names must be the same on all nodes. For example, if em1 is used as the public interface on node one, all other nodes require em1 as the public interface.
  • All public interfaces for each node should be able to communicate with all nodes within the cluster.
  • All private and ASM interfaces for each node should be able to communicate with all nodes within the cluster.
  • The hostname of each node must follow the RFC 952 standard (www.ietf.org/rfc/rfc952.txt). Hostnames that include an underscore ("_") are not permitted.
  • Each node in the cluster requires the following IP addresses:
    • One public IP address
    • Three private IP addresses (two for the private network and a third for the ASM network)
    • One virtual IP address
    • Three single client access name (SCAN) addresses for the cluster

[top]

Operating System Requirements

  • Red Hat Enterprise Linux 6.x AS x86_64 (Kernel 2.6.32-71 or higher)
  • Oracle Linux 6.x AS x86_64 (Kernel 2.6.32-71 or higher)

[top]

Preparing Nodes for Oracle Installation

Attaching to RHN/ULN Repository

NOTE: The documentation provided below discusses how to set up a local yum repository using your operating system installation media. If you would like to connect to the RHN/ULN channels, see the appropriate documentation. For Red Hat, see redhat.com/red_hat_network. For information relating to the ULN network, see linux.oracle.com.

The recommended configuration is to serve the files over HTTP using an Apache server (package name: httpd). This section discusses hosting the repository files on local filesystem storage. While other options for hosting repository files exist, they are outside the scope of this document. Local filesystem storage is highly recommended for speed and simplicity of maintenance.

1.     Mount the DVD image, either from physical media or from an ISO image.

    a.     To mount the DVD, insert the DVD into the server; it should auto-mount into the /media directory.
    b.     To mount an ISO image, run the following command as root, substituting the path name of your ISO image for the field myISO.iso:
      mkdir /media/myISO
      mount -o loop myISO.iso /media/myISO

2.     To install and configure the http daemon, configure the machine that will host the repository for all other machines to use the DVD image locally. Create the file /etc/yum.repos.d/local.repo and enter the following:

[local]
name=Local Repository
baseurl=file:///media/myISO/Server
gpgcheck=0
enabled=0

3.     Install the Apache service daemon with the following command, which also temporarily enables the local repository for dependency resolution:

yum -y install httpd --enablerepo=local

After the Apache service daemon is installed, start the service and set it to start on the next reboot. Run the following commands as root:

  • service httpd start
  • chkconfig httpd on


To use Apache to serve out the repository, copy the contents of the DVD
into a published web directory. Run the following commands as root (make sure to replace myISO with the name of your ISO):

  • mkdir /var/www/html/myISO
  • cp -R /media/myISO/* /var/www/html/myISO

NOTE: The command createrepo is often used for creating custom repositories, but it is not required as the DVD already holds the repository information.

  • This step is only necessary if you are running SELinux on the server that hosts the repository. The following command should be run as root and will restore the appropriate SELinux context to the copied files: restorecon -Rvv /var/www/html/.
  • The final step is to gather the DNS name or IP of the server that is hosting the repository. The DNS name or IP of the hosting server will be used to configure your yum repository repo file on the client server.

    The following is an example configuration using the RHEL 6.x Server media, held in the configuration file /etc/yum.repos.d/myRepo.repo:

      [myRepo]
      name=RHEL6.5 DVD
      baseurl=http://reposerver.mydomain.com/RHEL6_5/Server
      enabled=1
      gpgcheck=0


    NOTE: Replace reposerver.mydomain.com with your server's DNS name or IP address.

    NOTE: You can also place this configuration file on the server hosting the repository, so that it too can use the repository; this is a more permanent alternative to the local repository created in step 2.
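
    As a quick sanity check (a sketch; the repository and server names are the examples above), confirm the client can see the repository:

    # yum --disablerepo="*" --enablerepo=myRepo repolist
    # yum --disablerepo="*" --enablerepo=myRepo list available | head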

    [top]

     Installing the Dell-Oracle-RDBMS-Server-12c-Preinstall RPM

    The Dell Oracle Preinstall RPM is designed to do the following:

    • Disable transparent_hugepages in grub.conf
    • Disable NUMA in grub.conf
    • Create the Oracle user and the groups oinstall and dba
    • Set sysctl kernel parameters
    • Set user limits (nofile, nproc, stack) for the Oracle user

    Once your nodes have attached to the appropriate yum repository, we will need to install the Dell-Oracle-RDBMS-Server-12c-Preinstall RPM package. The Dell-Oracle-RDBMS-Server-12c-Preinstall RPM package automates certain pieces of the installation process required for the installation of Oracle RAC or Oracle single instance.

    The process to install the Dell-Oracle-RDBMS-Server-12c-Preinstall RPM package is as follows:

    1.     Download the latest Dell Oracle Deployment tar file from:
    http://en.community.dell.com/techcenter/enterprise-solutions/w/oracle_solutions/4957.oracle-database-deployment-automation.aspx

     


    NOTE: The filename follows the convention Dell-Oracle-Deployment-OS-version-year-month.tar, for example: Dell-Oracle-Deployment-EL6-12cR1-2014-2-2.tar

    2.     Copy the Dell Oracle Deployment tar file to a working directory of all your cluster nodes.

    3.     To go to your working directory, enter the following command:
    # cd </working/directory/path>

    4.     Untar the Dell-Oracle-Deployment release using the command:
    # tar zxvf Dell-Oracle-Deployment-o-v-y.m-s.tar.gz

    NOTE: Here, o is the operating system, v is the database version, y is the year, m is the month of the tar release, and s is the sub-version.

    5.     Change directory to Dell-Oracle-Deployment-o-v-y.m-s

    6.     Install the Dell-Oracle-RDBMS-Server-12c-Preinstall RPM package on all your cluster nodes using the following command:
    # yum localinstall dell-oracle-rdbms-server-12cR1-preinstall-1.0-12.el6.x86_64.rpm --nogpgcheck
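
    As a sanity check, the commands below sketch how to confirm what the preinstall RPM configured (the Oracle user, groups, limits, and grub settings listed above):

    # rpm -q dell-oracle-rdbms-server-12cR1-preinstall
    # id oracle                            # expect the oinstall and dba groups
    # su - oracle -c 'ulimit -n -u -s'     # nofile, nproc, and stack limits
    # grep transparent_hugepage /boot/grub/grub.conf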


    [top]

    Installing the Dell Oracle Utilities RPM

    The Dell Oracle utilities RPM applies the following Dell and Oracle recommended settings:

    • Create Grid Infrastructure directories, set ownership, and permissions.
    • Create the grid user.
    • Create Oracle Database (RDBMS) directories, set ownership, and permissions.
    • Create the Oracle base directories, set ownership, and permissions.
    • Set pam limits within /etc/pam.d/login.
    • Set up /etc/profile.
    • Set SELinux to Disabled.
    • Install the Dell PowerEdge system component drivers, if applicable.
    • Set kernel parameters.
    • Set user limits (nofile, nproc, stack) for the grid user within /etc/security/limits.conf.

    The process to install the Dell Oracle utilities RPM is as follows:

    1.     Download the latest Dell Oracle Deployment tar file from:
    http://en.community.dell.com/techcenter/enterprise-solutions/w/oracle_solutions/4957.oracle-database-deployment-automation.aspx

    NOTE: The filename follows the same Dell-Oracle-Deployment-o-v-y.m-s.tar.gz convention described in the previous section.



    2.     Copy the Dell Oracle Deployment tar file to a working directory of all your cluster nodes.

    3.     Go to your working directory via the command:
    # cd </working/directory/path>

    4.     Untar the Dell-Oracle-Deployment release using the command:
    # tar zxvf Dell-Oracle-Deployment-o-v-y.m-s.tar.gz

    NOTE: Here, o is the operating system, v is the database version, y is the year, m is the month of the tar release, and s is the sub-version.

    5.     Change directory to Dell-Oracle-Deployment-o-v-y.m-s

    6.     Install the Dell Oracle utilities RPM package on all your cluster nodes by typing:
    # yum localinstall dell-oracle-utilities-2014.02-2.el6.noarch.rpm --nogpgcheck

    7.     Once the RPM is installed, run the dodeploy script to set up the environment, as follows:
    # dodeploy -g -r 12cR1

    For more information about the Dell Oracle utilities RPM and its options, check the man pages using the command: # man 8 dodeploy
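
    To spot-check the environment dodeploy sets up, something like the following can help (the /u01 paths are the RPM defaults noted later in this guide):

    # id grid                                  # grid user and its groups
    # ls -ld /u01/app/grid /u01/app/12.1.0/grid /u01/app/oraInventory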

    NOTE: The Dell-Oracle-Deployment tar contains the latest supported drivers provided from our Software Deliverable List (SDL). Consult the README file found within the Dell-Oracle-Deployment tar for installation instructions for the latest drivers.





    [top]

    Oracle Software Binary Location

    The Oracle software binaries should be located on node 1 of your cluster. Note that Oracle 12c (12.1.0.1.0) database patch sets are full installations of the Oracle software.
    [top]

    Setting up the Network

    Public Network

    NOTE: Ensure that the public IP address is a valid and routable IP address.

    NOTE: Make sure NetworkManager is disabled, using the commands:

    service NetworkManager stop
    chkconfig NetworkManager off

    To configure the public network on each node:

    1.     Log in as root.

    2.     Edit the network device file /etc/sysconfig/network-scripts/ifcfg-em#
    where # is the number of the network device:

    NOTE: Ensure that the Gateway address is configured for the public network interface. If the Gateway address is not configured, the Oracle Grid installation may fail.


    DEVICE=em1
    ONBOOT=yes 
    NM_CONTROLLED=no
    IPADDR=<Public IP Address>
    NETMASK=<Subnet mask>
    BOOTPROTO=static
    HWADDR=<MAC Address>
    SLAVE=no
    GATEWAY=<Gateway Address>

    3.     Edit the /etc/sysconfig/network file and, if necessary, replace localhost.localdomain with the fully qualified public node name. For example, for node 1: HOSTNAME=node1.domain.com

    4.     Type service network restart to restart the network service.

    5.     Type ifconfig to verify that the IP addresses are set correctly.

    6.     To check your network configuration, ping each public IP address from a
    client on the LAN that is not a part of the cluster.

    7.     Connect to each node to verify that the public network is functioning. Type ssh <public IP> to verify that the secure shell (ssh) command is working.
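
    A small loop like the one below (node names are placeholders for your environment) checks reachability and ssh in one pass:

    for n in node1 node2; do
        ping -c 1 $n && ssh $n hostname
    done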

    [top]


    Private Network

    NOTE: Each of the two NIC ports for the private network must be on separate PCI buses.

    The grid infrastructure of Oracle 12c natively supports IP failover, known as Redundant Interconnect. Oracle uses its ora.cluster_interconnect.haip resource to communicate with Oracle RAC and other related services. The Highly Available Internet Protocol (HAIP) can activate a maximum of four private interconnect connections. These private network adapters can be configured during the initial installation of Oracle Grid or after the installation process using the oifcfg utility.

    Oracle Grid creates an alias IP (also known as a virtual private IP) on your private network adapters using the 169.254.*.* subnet for the HAIP. If this subnet range is already in use, Oracle Grid will not attempt to use it. The purpose of HAIP is to load balance across all active interconnect interfaces and fail over to other available interfaces if one of the existing private adapters becomes unresponsive.

    NOTE:

    • Configure a different subnet for each private network you want to configure as part of HAIP.
    • When adding HAIP addresses (maximum of four) after the installation of Oracle Grid, restart your Oracle Grid environment to make the new HAIP addresses active.

     The example below provides step-by-step instructions on enabling redundant interconnect using HAIP on a fresh Oracle 12c Grid Infrastructure installation.

     

    1.     Edit the /etc/sysconfig/network-scripts/ifcfg-emX configuration files (where X is the number of the em device) of the network adapters to be used for your private interconnect. The following example shows em2 using the 192.168.1.* subnet and em3 using 192.168.2.*.

    DEVICE=em2
    BOOTPROTO=static
    HWADDR=<HW_ADDR>
    ONBOOT=yes 
    NM_CONTROLLED=no
    IPADDR=192.168.1.140
    NETMASK=255.255.255.0

    DEVICE=em3
    HWADDR=<HW_ADDR>
    BOOTPROTO=static
    ONBOOT=yes 
    NM_CONTROLLED=no
    IPADDR=192.168.2.140
    NETMASK=255.255.255.0

    2.     Once you have saved both configuration files, restart your network service using service network restart. Completing the steps above prepares your system to enable HAIP using the Oracle Grid Infrastructure installer. When you have completed all the Oracle prerequisites and are ready to install Oracle, select em2 and em3 as 'private' interfaces at the 'Network Interface Usage' screen.

    3.     Redundant interconnectivity is enabled once your Oracle Grid Infrastructure installation has successfully completed and is running.

    4.     To verify that your redundant interconnect using HAIP is running, use the ifconfig command. An example of the output is listed below.

    ifconfig

    em2       Link encap:Ethernet  HWaddr 00:24:E8:6B:E9:5A
              inet addr:192.168.11.52  Bcast:192.168.11.255  Mask:255.255.255.0
              inet6 addr: fe80::224:e8ff:fe6b:e95a/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:14140 errors:0 dropped:0 overruns:0 frame:0
              TX packets:839 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2251664 (2.1 MiB)  TX bytes:56379 (55.0 KiB)
              Interrupt:48 Memory:d8000000-d8012800

    em2:1     Link encap:Ethernet  HWaddr 00:24:E8:6B:E9:5A
              inet addr:169.254.167.163  Bcast:169.254.255.255  Mask:255.255.0.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    em3       Link encap:Ethernet  HWaddr 00:24:E8:6B:E9:5C
              inet addr:192.168.11.53  Bcast:192.168.11.255  Mask:255.255.255.0
              inet6 addr: fe80::224:e8ff:fe6b:e95c/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:12768 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2211 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2163856 (2.0 MiB)  TX bytes:144187 (140.8 KiB)
              Interrupt:32 Memory:da000000-da012800

    em3:1     Link encap:Ethernet  HWaddr 00:24:E8:6B:E9:5C
              inet addr:169.254.167.163  Bcast:169.254.255.255  Mask:255.255.0.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:28662 errors:0 dropped:0 overruns:0 frame:0
              TX packets:28662 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:1604049 (1.5 MiB)  TX bytes:1604049 (1.5 MiB)



    For more information on Redundant Interconnect and ora.cluster_interconnect.haip, see My Oracle Support note 1210883.1.
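
    After the Grid installation, a quick sketch to confirm the interface classification is the oifcfg utility (assuming <GRID_HOME> is your Grid Infrastructure home); each output line lists the interface, subnet, scope, and type:

    $ <GRID_HOME>/bin/oifcfg getif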

    [top]

     

    Oracle Flex ASM Network

    Oracle Flex ASM can either use the same private networks as Oracle Clusterware or use its own dedicated private networks. Each network can be classified as PUBLIC, PRIVATE+ASM, PRIVATE, or ASM.

    IP Address and Name Resolution Requirements

    The steps below show how to set up your cluster nodes for using the Domain Name System (DNS). For information on how to set up cluster nodes using GNS, see the wiki article: http://en.community.dell.com/dell-groups/enterprise_solutions/w/oracle_solutions/1416.aspx

    For a Cluster using GNS

    To set up an Oracle 12c RAC using Oracle GNS, you need:

    • A static IP address for the GNS VIP address.
    • A Domain Name Server (DNS) running in the network for the address resolution of the GNS virtual IP address and hostname.
    • A DNS entry to configure the GNS sub-domain delegation.
    • A DHCP server running on the same public network as your Oracle RAC cluster.

    Table 1 describes the different interfaces, IP address settings, and the resolutions in a cluster.

    Table 1. Interface Requirements

    Interface          Type      Resolution
    Public             Static    DNS
    Private            Static    Not required
    ASM                Static    Not required
    Node Virtual IP    DHCP      GNS
    GNS virtual IP     Static    DNS
    SCAN virtual IP    DHCP      GNS

     

     

    Configuring the DNS Server to support GNS

     

    To configure changes on a DNS server for an Oracle 12cR1 cluster using a GNS:

     

    1.    Configure the GNS VIP address on the DNS server—In the DNS, create a name resolution entry for the GNS virtual IP address in the forward lookup file.

    For example: gns-server IN A 155.168.1.2

    where gns-server is the GNS virtual IP address given during the Oracle Grid installation. The address that you provide must be routable and should be in the public IP address range.

    2.    Configure the GNS sub-domain delegation—In the DNS, create an entry to establish DNS Lookup that directs the DNS resolution of a GNS subdomain to the cluster.

    Add the following to the DNS lookup file:

    clusterdomain.example.com. NS gns-server.example.com.

    where clusterdomain.example.com. is the GNS sub-domain (provided during the Oracle Grid installation) that you delegate, and gns-server.example.com. resolves to the GNS virtual IP address.
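
    Put together, a minimal sketch of the corresponding entries in a BIND-style zone file for example.com (the names and addresses are the examples above):

    ; GNS virtual IP address
    gns-server                  IN  A   155.168.1.2
    ; delegate the GNS sub-domain to the cluster
    clusterdomain.example.com.  IN  NS  gns-server.example.com.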

     

    Configuring a DNS Client

     

    To configure the changes required on the cluster nodes for name resolution:

    1.    Configure the /etc/resolv.conf file on each node in the cluster to contain name server entries that are resolvable to the DNS server.

    For example, edit the /etc/resolv.conf file as:

    search ns1.example.com

    nameserver   155.168.1.1

    nameserver   155.168.1.2

     

    NOTE: The total time-out period, which is a combination of the options attempted and options timed-out values, should be less than 30 seconds.

    Here 155.168.1.1 is the valid DNS server address, 155.168.1.2 is the GNS virtual IP address, and ns1.example.com is the domain server in your network.
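
    One way to keep the resolver's total time-out under 30 seconds is an options line in /etc/resolv.conf (a sketch; the values are illustrative):

    # 2 attempts x 2 s time-out per nameserver keeps lookups well under 30 s
    options timeout:2 attempts:2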

     

    2.    Verify the name service order. The /etc/nsswitch.conf file controls the name service order. In some configurations, NIS can cause issues with Oracle SCAN address resolution. It is recommended that you place the NIS entry at the end of the search list.

    For example, hosts: dns files nis

    Once the appropriate changes have been made, run the following command to restart the service:

    /sbin/service nscd restart

     

    Preparing Shared Storage for Oracle RAC Installation

    NOTE: In this section, the terms disk(s), volume(s), virtual disk(s), and LUN(s) have the same meaning and are used interchangeably, unless specified otherwise. Similarly, the terms Stripe Element Size and Segment Size are used interchangeably.

    Oracle RAC requires shared LUNs for storing your Oracle Cluster Registry (OCR), voting disks, Oracle Database files, and Flash Recovery Area (FRA). To ensure high availability for Oracle RAC it is recommended that you have:

    • Three shared volumes/LUNs, each 4 GB in size, for normal redundancy, or five volumes/LUNs for high redundancy, for the Oracle Clusterware.
    • Two shared volumes/LUNs to store your database for normal redundancy, or three volumes/LUNs for high redundancy.
    • Two shared volumes/LUNs to store your FRA for normal redundancy, or three volumes/LUNs for high redundancy. Ideally, the FRA space should be large enough to copy all of your Oracle datafiles and incremental backups. For more information on optimally sizing your FRA, see the My Oracle Support note "What should be the size of Flash Recovery Area?"

    NOTE: The use of device mapper multipath is recommended for optimal performance and persistent name binding across nodes within the cluster.
    NOTE: For more information on attaching shared LUNs/volumes, see the Wiki documentation found at: 
    http://en.community.dell.com/dell-groups/enterprise_solutions/w/oracle_solutions/3-storage.aspx

    [top] 

    Setting up Device Mapper Multipath

    The purpose of Device Mapper Multipath is to enable multiple I/O paths to improve performance and provide consistent naming. Multipathing accomplishes this by combining your I/O paths into one device mapper path and properly load balancing the I/O. This section provides best practices on how to set up device mapper multipathing on your Dell PowerEdge server. Verify that your device-mapper and multipath driver are at least the version shown below or higher:

    1.     rpm -qa | grep device-mapper-multipath

    device-mapper-multipath-0.4.9-64.el6.x86_64

    2.     Identify your local disks, for example /dev/sda. Once your local disk is determined, run the following command to obtain its scsi_id:
     scsi_id --page=0x83 --whitelisted --device=/dev/sda
     360026b900061855e000007a54ea53534
     

    3.     Open the /etc/multipath.conf file, then locate and comment out the section below:
    #blacklist {
    # devnode "*"
    #}
     

    4.     Once the scsi_id of your local disk has been retrieved, you must blacklist it from being used as a multipath device. In the /etc/multipath.conf file, locate, uncomment, and modify the section below:
    blacklist {

                    wwid <enter your local disk scsi_id here>
                    devnode "^(ram|raw|loop|fd|md|dm-|sr||scd|st)[0-9]*"
                    devnode "^hd[a-z]"
    }
     

    5.     Uncomment your defaults section within your /etc/multipath.conf:

    defaults {
    udev_dir /dev
    polling_interval 10
    selector "round-robin 0"
    path_grouping_policy multibus
    getuid_callout "/sbin/scsi_id -g -u -s/block/%n"
    prio_callout /bin/true
    path_checker readsector0
    rr_min_io 100
    max_fds 8192
    rr_weight priorities
    failback immediate
    no_path_retry fail
    user_friendly_names yes
    }
     

    NOTE: For the PS6110 EqualLogic storage array, the path_checker attribute needs to be set to 'tur'. See: How to configure multipath for EqualLogic Storage Array PS6110.

     

    6.     Locate the multipaths section within your /etc/multipath.conf file. In this section, provide the scsi_id of each LUN/volume and an alias, in order to keep a consistent naming convention across all of your nodes (a helper sketch for gathering scsi_ids follows these steps). An example is shown below:
    multipaths {
               multipath {
                                wwid <scsi_id of volume1>
                                alias alias_of_volume1
               }
               multipath {
                                wwid <scsi_id of volume2>
                                alias alias_of_volume2
               }
    }
     

    7.     Restart your multipath daemon service using:
    service multipathd restart

    8.     Verify that your multipath volume aliases are displayed properly:
    multipath -ll

    9.     Make sure the multipath service starts upon boot, using the command:
    chkconfig multipathd on

    10.   Repeat steps 1-9 for all nodes.  
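
    To help gather the scsi_ids for the multipaths section in step 6, a loop such as the following (a sketch; adjust the device glob for your environment) prints one line per disk:

    for d in /dev/sd[a-z]; do
        echo "$d $(scsi_id --page=0x83 --whitelisted --device=$d)"
    done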

    [top]

    Partitioning the Shared Disk

    This section describes how to use the Linux native partition utility fdisk to create a single partition on a volume/virtual disk that spans the entire disk.

    To use the fdisk utility to create a partition:

    1.     At the command prompt, type one of the following:
    #> fdisk -cu /dev/<block_device>
    #> fdisk -cu /dev/mapper/<multipath_disk>

    Where <block_device> is the name of the block device that you are creating a partition on. For example, if the block device is /dev/sdb, type: fdisk -cu /dev/sdb

    The output is similar to:

    Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
    Building a new DOS disklabel with disk identifier 0x89058e03.
    Changes will remain in memory only, until you decide to write them.
    After that, of course, the previous content won't be recoverable.

    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

    The device presents a logical sector size that is smaller than
    the physical sector size. Aligning to a physical sector (or optimal
    I/O) size boundary is recommended, or performance may be impacted.

    2.     At the fdisk prompt, enter the following:
    Command (m for help): n            # create a new partition
    Command action (e/p): p            # primary partition
    Partition number (1-4): 1
    First sector (4096-xxxxxx, default 4096): <Enter>
    Last sector (default): <Enter>
    Command (m for help): w            # write the table and exit

    3.     Repeat steps 1 and 2 for all the disks.

    4.     Type the following to re-read the partition table and to be able to see the newly created partition(s):

      #> partprobe
      Or
      #> service multipathd restart
      Or
      #> kpartx -a /dev/mapper/<multipath_disk>

    5.     Reboot the system if your newly created partition is not displayed properly. A non-interactive alternative is sketched below.
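
      As referenced in step 5, an equivalent non-interactive sketch using parted (the device name is a placeholder; 4096s matches the first-sector default shown above):

      #> parted -s /dev/mapper/<multipath_disk> mklabel msdos
      #> parted -s /dev/mapper/<multipath_disk> mkpart primary 4096s 100%
      #> partprobe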

      [top]

    Identifying ASM Disks to Setup Udev Rules **Applies to RHEL 6.x and Oracle Linux 6 (if not using ASMlib)**

     

    Red Hat Enterprise Linux 6.x and Oracle Linux 6 can use udev rules to ensure that the system properly manages the permissions of device nodes. In this case, we are referring to properly setting permissions for our LUNs/volumes discovered by the OS. It is important to note that udev rules are executed in enumerated order. When creating udev rules for setting permissions, include the prefix 20- and append .rules to the end of the filename, for example: 20-dell_oracle.rules.

    In order to set udev rules, you must capture the multipath volume alias of each disk to be used within your ASM device, using the multipath -ll command:

    [root@rhel6.x ~]# multipath -ll

    This command lists all the volume aliases present on the node.

    Once the multipath volumes alias have been captured, create a file within the /etc/udev/rules.d/ directory and name it 20-dell_oracle.rules. A separate KERNEL entry must exist for each storage device.

     An example of what needs to be placed in the /etc/udev/rules.d/20-dell_oracle.rules file

     

    #------------------------ start udev rule contents ------------------#

    KERNEL=="dm-*", ENV{DM_NAME}=="OCRp?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"

    KERNEL=="dm-*", ENV{DM_NAME}=="DATAp?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"

    KERNEL=="dm-*", ENV{DM_NAME}=="FRAp?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"

     #-------------------------- end udev rule contents ------------------#

     

    The KERNEL key above matches all dm devices; if the DM_NAME of a device matches one of the multipath volume aliases, the rule assigns the grid user as the OWNER and the asmadmin group as the GROUP.
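
    To apply the new rules without rebooting, the standard udev tools can be used:

    # udevadm control --reload-rules
    # udevadm trigger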

     

    [top]

    Installing and Configuring ASMLib **Applies ONLY to Oracle Linux 6.x and if not setting udev rules**

    1.     Use ULN or OTN to download the following packages:
    • oracleasm-support
    • oracleasmlib
    • kmod-oracleasm

      NOTE: If your current OS distribution is Oracle Linux, you can obtain the software from the Unbreakable Linux Network using ULN.

      NOTE: Download the latest versions of oracleasm-support and kmod-oracleasm from ULN.

    NOTE: Download the latest version of oracleasmlib, but the version of oracleasm must match the current kernel used in your system. You can check the kernel version with the command uname -r. See the link below for oracleasmlib downloads:

     http://www.oracle.com/technetwork/server-storage/linux/asmlib/ol6-1709075.html

     

    2.     Enter the following command as root:
    rpm -Uvh oracleasm-support-* \
    oracleasmlib-* \
    kmod-oracleasm-* \
    oracleasm-$(uname -r)-*

    NOTE: Replace * with the correct version numbers of the packages, or leave the wildcards in place after ensuring that there are not multiple versions of the packages in the shell's current working directory.

    [top]

    Using ASMLib to Mark the Shared Disks as Candidate Disks **Applies ONLY to Oracle Linux 6 and if not setting udev rules**

    1.     To configure ASM, use the init script that comes with the oracleasm-support package. The recommended method is to run the following command as root:
    # /usr/sbin/oracleasm configure -i

    NOTE: Oracle recommends using the oracleasm command found under /usr/sbin. The /etc/init.d path has not been deprecated, but the oracleasm binary provided by Oracle in this path is used for internal purposes.

    Default user to own the driver interface []: grid
    Default group to own the driver interface []: asmadmin
    Start Oracle ASM library driver on boot (y/n) [ n ]: y
    Fix permissions of Oracle ASM disks on boot (y/n) [ y ]: y

    NOTE: In this setup the default user is set to grid and the default group is set to asmadmin. Ensure that the oracle user is part of the asmadmin group. You can do so by using the dell-validated and dell-oracle-utilities rpms.

    The boot time parameters of the Oracle ASM library are configured and a sequential text interface configuration method is displayed.

    2.     Set the ORACLEASM_SCANORDER parameter in /etc/sysconfig/oracleasm.

    NOTE: When setting ORACLEASM_SCANORDER to a value, specify a common string associated with your device mapper pseudo device names. For example, if all the device mapper devices had a prefix string of the word "asm" (/dev/mapper/asm-ocr1, /dev/mapper/asm-ocr2), populate the ORACLEASM_SCANORDER parameter as: ORACLEASM_SCANORDER="dm". This ensures that oracleasm will scan these disks first.

    3.     Set the ORACLEASM_SCANEXCLUDE parameter in /etc/sysconfig/oracleasm to exclude non-multipath devices.

    For example: ORACLEASM_SCANEXCLUDE=<disks to exclude>

    NOTE: If we wanted to ensure to exclude our single path disks within /dev/ such as sda and sdb, our ORACLEASM_SCANEXCLUDE string would look like: ORACLEASM_SCANEXCLUDE="sda sdb"

    Set the parameter below to true (by default it is false):
    ORACLEASM_USE_LOGICAL_BLOCK_SIZE=true

     

    4.     To create ASM disks that can be managed and used for Oracle database installation, run the following command as root:

    /usr/sbin/oracleasm createdisk DISKNAME /dev/mapper/diskpartition

    NOTE: The fields DISKNAME and /dev/mapper/diskpartition should be substituted with the appropriate names for your environment respectively.

    NOTE: It is highly recommended to have all of your Oracle related disks to be within Oracle ASM. This includes your OCR disks, voting disks, database disks, and flashback recovery disks.

    5.     Verify the presence of the disks in the ASM library by running the following command as root:
    /usr/sbin/oracleasm listdisks
    All the instances of DISKNAME from the previous command(s) are displayed.

    To delete an ASM disk, run the following command:

    /usr/sbin/oracleasm deletedisk DISKNAME

    To discover the Oracle ASM disks on other nodes in the cluster, run the following command on the remaining cluster nodes:
    /usr/sbin/oracleasm scandisks
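
    As a sketch, marking a typical OCR/DATA/FRA layout might look like the following (the disk names and partition paths are hypothetical and must match your multipath aliases):

    # /usr/sbin/oracleasm createdisk OCR1 /dev/mapper/asm-ocr1p1
    # /usr/sbin/oracleasm createdisk DATA1 /dev/mapper/asm-data1p1
    # /usr/sbin/oracleasm createdisk FRA1 /dev/mapper/asm-fra1p1
    # /usr/sbin/oracleasm listdisks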

    Installing Oracle 12c Grid Infrastructure for a Cluster

    This section provides the installation steps for Oracle 12c Grid Infrastructure for a cluster.

    Before You Begin

    Before you install the Oracle 12c RAC software on your system:

    • Ensure that you have already configured your operating system, network, and storage based on the steps from the previous sections within this document.
    • Locate your Oracle 12c media kit.

      [top]


    Configure the System Clock Settings for All Nodes

    To prevent failures during the installation procedure, configure all the nodes with identical system clock settings. Synchronize the node system clocks using the Cluster Time Synchronization Service (CTSS), which is built into Oracle 12c. To enable CTSS, disable the operating system network time protocol daemon (ntpd) service using the following commands, in this order:

    1.     service ntpd stop

    2.     chkconfig ntpd off

    3.     mv /etc/ntp.conf /etc/ntp.conf.orig

    4.     rm /var/run/ntpd.pid
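
    To confirm ntpd is stopped and will not start at boot (CTSS then runs in active mode once Clusterware is up):

    # service ntpd status
    # chkconfig --list ntpd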

    Configuring Node One

    The following steps are for node one of your cluster environment, unless otherwise specified.

    a.     Log in as root. 

    b.     If you are not in a graphical environment, start the X Window System by typing: startx

    c.     Open a terminal window and type: xhost + 

    d.     Mount the Oracle Grid Infrastructure media. 

    e.     Log in as grid user, for example: su - grid. 

    f.      Type the following command to start the Oracle Universal Installer: <CD_mountpoint>/runInstaller

    In the Download Software Updates window, enter your My Oracle Support credentials to download the latest patch updates. If you choose not to download the latest patches, select Skip software updates and click Next.

     

     

     

    g.     In the Select Installation Option window, select Install and Configure Grid Infrastructure for a Cluster and click Next.

     

     


    h.     In the Select Cluster Type window, select Configure a Flex Cluster, and click Next.





    i.      In the Select Product Languages window, select English, and click Next.

    j.      In the Grid Plug and Play Information window, enter the following information:

    • Cluster Name—Enter a name for your cluster.
    • SCAN Name—Enter the name registered in the DNS server, which is unique for the entire cluster. For more details on setting up your SCAN name, see "IP Address and Name Resolution Requirements".
    • SCAN Port—Retain the default port of 1521.
    • Configure GNS—Check this option, select Configure nodes Virtual IPs as assigned by the Dynamic Networks, select Create a new GNS, enter the GNS VIP Address for the cluster and the GNS Sub Domain mentioned in the DNS server, and click Next.





    k.     In the Cluster Node Information window, click Add to add additional nodes that must be managed by the Oracle Grid Infrastructure.

    • Enter the public Hostname information for Hub and Leaf cluster member nodes.
    • Enter the Role of each cluster member node.
    • Repeat step 'k' for each node within your cluster.





      l.      Click SSH Connectivity and configure your passwordless SSH connectivity by entering the OS Password for the grid user and click Setup.





      m.    Click Ok and then click Next to go to the next window.

      n.     In the Specify Network Interface Usage window, make sure that the correct interface usage types are selected for the interface names. From the ‘Use for’ drop-down list, select the required interface type. The available options are Public, Private, ASM, ASM and Private and Do Not Use. Click Next.



      o.     In the Grid Infrastructure Management Repository Option window select Yes for Configure Grid Infrastructure Management and click Next.

      p.     In the Storage Option Information window, select Automatic Storage Management (ASM) and click Next.

      q.     In the Create ASM Disk Group window, enter the following information:

      • ASM diskgroup—Enter a name, for example: OCR_VOTE
      • Redundancy—For your OCR and voting disks, select High if five ASM disks are available, select Normal if three ASM disks are available, or select External if one ASM disk is available (not recommended).

        NOTE: For Oracle Linux 6 (RHEL compatible kernel), if no candidate disks are displayed, click Change Discovery Path and enter ORCL:* or /dev/oracleasm/disks/*. Ensure that you have marked your Oracle ASM disks; for more information, see "Using ASMLib to Mark the Shared Disks as Candidate Disks".

        NOTE: For RHEL 6, If no candidate disks are displayed, click Change Discovery Path and enter /dev/mapper/*.


         

         

        r.    In the Specify ASM Password window, choose the relevant option under Specify the passwords for these accounts and enter the relevant values for the password. Click Next.





        s.     In the Failure Isolation Support window, select Do Not use Intelligent Platform Management Interface (IPMI).







        t.      In the Privileged Operating Systems Groups window, select:

        • asmdba for the Oracle ASM DBA (OSDBA for ASM) group
        • asmoper for the Oracle ASM Operator (OSOPER for ASM) group
        • asmadmin for the Oracle ASM Administrator (OSASM) group





        u.     In the Specify Installation Location window, specify the values of your Oracle Base and Software Location as configured within the Dell Oracle utilities RPM.

        NOTE: The default locations used within the Dell Oracle utilities RPM are:

        • Oracle Base - /u01/app/grid
        • Software Location - /u01/app/12.1.0/grid





        v.     In the Create Inventory window, specify the location for your Inventory Directory. Click Next.






        NOTE: The default location for the Inventory Directory, based on the Dell Oracle utilities RPM, is /u01/app/oraInventory.

        w.    In the Root script execution configuration window, select Automatically run configuration scripts, enter the password for the root user, and click Next.


          




        x.     In the Summary window, verify all the settings and select Install.







        y.     In the Install Product window, check the status of the Grid Infrastructure installation.


          


        z.     After the installation is complete, click Yes to allow the configuration scripts to be run by the privileged user root in the pop-up window.

         

         

         

         

        In the Finish window, click Close.
        [top]


        Installing Oracle 12c Database (RDBMS) Software

        The following steps are for node 1 of your cluster environment, unless otherwise specified.

        1.     Log in as root and type: xhost +

        2.     Mount the Oracle Database 12c media. 

        3.     Log in as Oracle user by typing: 
        su - oracle 

        4.     Run the installer script from your Oracle database media:
        <CD_mount>/runInstaller

        5.     In the Configure Security Updates window, enter your My Oracle Support credentials to receive security updates; otherwise, click Next.



        6.     In the Download Software Updates window, enter your My Oracle Support credentials to download patch updates available after the initial release. If you choose not to update at this time, select Skip software updates and click Next.



        7.     In the Select Installation Option window, select Install database software only.



        8.     In the Grid Installation Options window:

        • Select Oracle Real Application Clusters database installation and click Next.

         

       

       

      9.     In the Select List of Nodes window, select all the Hub nodes (omit Leaf nodes), click SSH Connectivity, and configure your passwordless SSH connectivity by entering the OS Password for the oracle user and selecting Setup. Click Ok, then click Next to go to the next window.












      10.   In the Select Product Languages window, select English as the Language Option and click Next.

      11.   In the Select Database Edition window, select Enterprise Edition and click Next.





      12.   In the Installation Location window, specify the location of your Oracle Base configured within the Dell Oracle utilities RPM and click Next.

      NOTE: Use the default locations configured within the Dell Oracle utilities RPM.




       13.   In the Privileged Operating System Groups window, select dba for Database Administrator (OSDBA) group, asmoper for Database Operator (OSOPER) group, backupdba for Database Backup and Recovery (OSBACKUPDBA) group, dgdba for Data Guard administrative (OSDGDBA) group and kmdba for Encryption Key Management administrative (OSKMDBA) group and click Next.





      14.   In the Summary window verify the settings and select Install.





      15.   On completion of the installation process, the Execute Configuration scripts wizard is displayed. Follow the instructions in the wizard and click Ok.





      NOTE: The root.sh script should be run on one node at a time.

      16.   In the Finish window, click Close.

      [top]

      Creating Diskgroup Using ASM Configuration Assistant (ASMCA)

      This section contains procedures to create the ASM disk group for the database files and Flashback Recovery Area (FRA).

      1.     Log in as grid user. 

      2.     Start the ASMCA utility by typing:
      $<GRID_HOME>/bin/asmca

      3.     In the ASM Configuration Assistant window, select the Disk Groups tab.

      4.     Click Create.



      5.     Enter the appropriate Disk Group Name, for example: DATA.

      6.     Select External for Redundancy.

      7.     Select the appropriate member disks to be used to store your database files, for example: ORCL:DATA


        




      NOTE: If no candidate disks are displayed, click Change Discovery Path and type: ORCL:* or /dev/oracleasm/disks/*

      NOTE: Ensure you have marked your Oracle ASM disks. For more information, see "Using ASMLib to Mark the Shared Disks as Candidate Disks".

      8.     Click Show Advanced Options, select the appropriate Allocation Unit Size, specify the minimum software versions for ASM, Database, and ASM volumes, and click OK to create and mount the disks.

       

      9.     Repeat step 4 to step 8 to create another disk group for your Flashback Recovery Area (FRA).

      NOTE: Make sure that you label your FRA disk group differently from your database disk group name. For labeling your Oracle ASM disks, see "Using ASMLib to Mark the Shared Disks as Candidate Disks".

      Click Exit to exit the ASM Configuration Assistant.

      Creating Database Using DBCA

      The following steps are applicable for node 1 of your cluster environment, unless otherwise specified:

      1.     Log in as the oracle user.

      2.     From $<ORACLE_HOME>, run the DBCA utility by typing:
      $<ORACLE_HOME>/bin/dbca &

      3.     In the Welcome window, select Create Database and click Next.



      4.     In the Creation Mode window, select Advanced Mode, and click Next.



      5.     In the Database Template window, select Oracle Real Application Clusters (RAC) database as the Database Type, select Admin-Managed as the Configuration Type, select a Template, and click Next.


        


      6.     In the Database Identification window:

        a.     Enter appropriate values for Global Database Name and SID Prefix.
        b.     Select Create As Container Database and specify the number of PDBs and the PDB Name Prefix. Click Next.

       

     

    7.     In the Database Placement window, select all the available Hub nodes and click Next.

     

     

    8.     In the Management Options window, select Configure Enterprise Manager (EM) Database Express and Run Cluster Verification Utility (CVU) Checks Periodically, and click Next.


      


    9.     In the Database Credentials window, enter the appropriate credentials for your database. 

                

              

    10.   In the Storage Locations window, select:

      a.     Automatic Storage Management (ASM) for Storage Type.
      b.     Use Oracle-Managed Files for Storage Location.
      c.     Browse to select the ASM disk group that you created to store the database files for Database Area.
      d.     Select Specify Flash Recovery Area.
      e.     Browse and select the ASM disk group that you created for the Flash Recovery Area.
      f.      Enter a value for Flash Recovery Area Size.
      g.     Select Enable Archiving.
      h.     Click Next.

              
      

           

11.   In the Database Options window, click Next.

12.   In the Initialization Parameters window:

  a.     Select Custom Settings.
  b.     For Memory Management, select Automatic Shared Memory Management.
  c.     Specify appropriate values for the SGA Size and PGA Size.
  d.     Click Next.

        

        

13.   In the Creation Options window, click Finish. 


  




14.   In the Summary window, click Finish to create the database.






NOTE: Database creation can take some time to complete.

Click Exit on the Database Configuration Assistant window after the database creation is complete.

