ASM and ACFS
Storage Conversion for Member Clusters
You can use ASMCMD commands to administer the
configuration of member clusters. For example, you can change the storage
method from direct Oracle ASM to indirect Oracle ASM, or change from indirect
Oracle ASM to direct Oracle ASM.
ASM Data Reliability Enhancements
This enhancement represents two changes. The first
extends the default disk failure timeout interval (DISK_REPAIR_TIME) from
3.6 hours to 12 hours. In many environments, 12 hours is better suited for
safeguarding against data loss from multiple disk failures while also
reducing the unnecessary overhead of prematurely dropping a disk
during a transient failure. The second change provides a new disk group
attribute called CONTENT_HARDCHECK.ENABLED that optionally
enables or disables Hardware Assisted Resilient Data (HARD) checking in
Exadata environments.
These two enhancements give Exadata customers
greater control over how ASM provides essential data protection, specifically
Hardware Assisted Resilient Data (HARD) checking and the automatic dropping of
failed disks from an ASM disk group.
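Both settings are disk group attributes. A minimal sketch, assuming a disk group named DATA (the name is illustrative):
    -- Extend the repair window beyond the former 3.6-hour default
    ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '12h';
    -- Enable HARD checking on Exadata; TRUE/FALSE values are assumed here
    ALTER DISKGROUP data SET ATTRIBUTE 'content_hardcheck.enabled' = 'TRUE';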
ASM Database Cloning
ASM database cloning provides cloning of pluggable
databases (PDBs) in an Oracle Multitenant environment. This feature works by
leveraging ASM redundancy. Previously, as protection against data loss during
hardware failure, ASM provided up to two additional redundant copies of a
file's extents. Flex Disk Groups can now provide up to five redundant copies,
and one or more of those copies can be split off to provide a
near-instantaneous replica.
The advantage of ASM database cloning, compared
with storage array-based replication, is that ASM database clones replicate
complete databases (PDBs) rather than files or blocks of physical storage.
Storage array-based or file system-based replication, in a database environment,
requires coordinating the database objects being replicated with the
underlying technology performing the replication. With ASM database clones, the
administrator does not need to understand the physical storage layout. This is
another aspect of the database-oriented storage management provided with ASM
Flex Disk Groups.
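As an illustrative sketch of the flow (the PDB, clone, and mirror-copy names are hypothetical), the clone is driven from the CDB root in two steps: an extra mirror copy of the source PDB is prepared inside the flex disk group, and that copy is then split off as a new PDB:
    -- Prepare an additional mirrored copy of PDB1's files in the flex disk group
    ALTER PLUGGABLE DATABASE pdb1 PREPARE MIRROR COPY pdb1_mir;
    -- Once the copy is ready (progress can be tracked in V$ASM_DBCLONE_INFO),
    -- split it off as a near-instantaneous clone
    CREATE PLUGGABLE DATABASE pdb1_clone FROM pdb1 USING MIRROR COPY pdb1_mir;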
Dropping Oracle ASM File Groups With a Cascade
Option
You can drop a file group and its associated files
(drop including content) using the CASCADE keyword with the ALTER DISKGROUP ...
DROP FILEGROUP SQL statement.
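For example, assuming a flex disk group named DATA containing a file group named FG_PDB1 (both names are illustrative):
    -- Drop the file group and all of the files it contains in one statement
    ALTER DISKGROUP data DROP FILEGROUP fg_pdb1 CASCADE;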
Converting Normal or High Redundancy Disk Groups to
Flex Disk Groups without Restricted Mount
You can convert a conventional disk group (a disk group
created before Oracle Database 18c) to an Oracle ASM flex disk group without
using the restricted mount (MOUNTED RESTRICTED) option.
ASM Flex Disk Groups provide several new
capabilities, such as quota management and database cloning. In Oracle
18c, customers migrating from Normal or High Redundancy Disk Group
environments benefit from a seamless means of converting existing
Disk Groups to Flex Disk Groups. Before 18c, customers migrating Disk Groups
had to mount those Disk Groups in a restricted mode that prevented
any configuration change during the transition.
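A minimal sketch, assuming an existing Normal or High Redundancy Disk Group named DATA (the name is illustrative) that remains mounted normally during the operation:
    -- Convert in place; no MOUNTED RESTRICTED mount is required in 18c
    ALTER DISKGROUP data CONVERT REDUNDANCY TO FLEX;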
Oracle ACFS Remote Service for Member Clusters
In addition to support for Oracle member clusters
with attached local storage, Oracle ACFS provides Oracle ACFS remote service
for native Oracle ACFS functionality on member clusters with no attached local
storage (indirect storage member clusters). Utilizing an Oracle ACFS deployment
on the Oracle Domain Services Cluster (DSC), Oracle ACFS remote service can be
used for both Oracle Application Clusters and database member clusters to
enable a flexible and file system-based deployment of applications and databases.
Unlike NFS-based exports, Oracle ACFS remote service fully supports advanced
Oracle ACFS features, such as replication, snapshots, and tagging, on the
destination member cluster.
Cluster Health Advisor
Cluster Health Advisor Cross Database Analysis
Support
In consolidated and DBaaS private cloud deployments,
multiple databases share the same physical server and its resources. In
its previous release, Cluster Health Advisor analyzed each hosted database
instance individually and could only detect whether the cause of a performance
or availability issue was within that instance or external to it. With its new
cross-database analysis support, external issues can be traced to a specific
database, resulting in higher-confidence diagnosis and improved corrective
actions.
Early warnings, targeted diagnosis, and corrective
actions are critical capabilities for modern database deployments designed to
be available and performant 24x7. Consolidated and DBaaS private clouds are
particularly difficult because of interactions between databases sharing the same
physical resources and the one-to-many DBA-to-database staffing typical of these
deployments. Oracle Cluster Health Advisor now supports analyzing these
complex multi-database environments. Surfacing early warning notifications with
a specific database cause and corrective action speeds triage and allows
administrators to proactively maintain availability and performance, saving IT
staffing and downtime costs.
Cluster Health Advisor Cross Cluster Analysis
Support
In its previous release, Oracle Cluster Health
Advisor analyzed each cluster node individually and could only detect whether
the cause of a performance or availability issue was within that node or
external to it. With the new cross-cluster analysis support, Oracle
Cluster Health Advisor can trace external issues to a specific cluster node,
resulting in higher-confidence diagnosis and improved corrective actions.
Oracle Cluster Health Advisor's ability to trace
database or cluster performance degradation, or impending problems,
to a specific root cause on a specific node greatly improves the response time
for applying corrective actions and preventing loss of database availability or
violations of SLAs.
General
Shared Single Client Access Names
A shared single client access name (SCAN) enables one
set of SCAN virtual IPs (VIPs) and listeners (referred to as the SCAN setup)
on one dedicated cluster in the data center to be shared with other clusters,
avoiding the deployment of one SCAN setup per cluster. This reduces both the
number of SCAN-related DNS entries and the number of VIPs that need to be
deployed for a cluster configuration.
A shared SCAN simplifies the deployment and
management of groups of clusters in the data center by providing a shared SCAN
setup that can be used by multiple systems at the same time.
NodeVIP-Less Cluster
NodeVIP-Less Cluster enables the configuration of a
cluster without the need to explicitly configure node VIPs on the public
network. While the VIP resources are still maintained at the Clusterware level,
there is no need to provision additional IP addresses for each node in the
cluster, which in larger cluster estates can save hundreds of IP addresses per
subnet.
NodeVIP-Less Cluster simplifies cluster deployments
and management by eliminating the need for additional IPs per node in the
cluster.
Cluster Domain Proxies
Cluster domain proxies provide resource state change
notifications from one cluster to another, and enable resources in one cluster
to act on behalf of dependencies on resources in another cluster. You can use
cluster domain proxies, for example, to ensure that an application in an Oracle
Application Member Cluster only starts if its associated database hosted in an
Oracle Database Member Cluster is available. Similarly, you can use cluster
domain proxies to ensure that a database in an Oracle Database Member Cluster
only starts if at least one Oracle Automatic Storage Management (Oracle ASM)
instance on the Domain Services Cluster is available.
Cluster domain proxies simplify
manageability and increase availability for applications running on distributed
infrastructures spanning multiple clusters.
gridSetup-based Management
Gold image-based installation, using gridSetup.sh or gridSetup.bat,
replaces the method of using Oracle Universal Installer for installing Oracle
Grid Infrastructure. You can use gridSetup-based management to perform
tasks such as cloning, addNode operations, deleteNode operations, and downgrade
with the gridSetup.sh or gridSetup.bat command.
gridSetup-based management simplifies deployment and
deployment-related management tasks with a unified and simple tool.
Reader Nodes Performance Isolation
In the Reader Nodes architecture, the updates made on
the read-write instances on the Hub nodes are immediately propagated to the
read-only instances on the Leaf nodes, where they can be used for online
reporting or instant queries. Reader Nodes Performance Isolation enables the
OLTP workload on the Hub nodes to continue even when the associated database
instances on the Leaf nodes fail to process the updates.
Horizontal scaling using Reader Nodes is further
improved by Reader Nodes Performance Isolation, as slow Leaf node-based
instances will neither slow down the OLTP workload nor otherwise impact it.
UCP Support for RAC Affinity Sharding
RAC affinity sharding ensures that a group of
related cache entries is contained within a single cache partition. When Data
Affinity is enabled on the Oracle RAC database, data in the affinitized
tables is partitioned in such a way that a particular partition, or subset of
rows for a table, is affinitized to a particular Oracle RAC database instance.
The improved cache locality
and reduced internode synchronization with Data Affinity
lead to higher application performance and scalability.