1/31/2018
Copyright © 2016, 2018, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing
restrictions on use and disclosure and are protected by intellectual property laws. Except as
expressly permitted in your license agreement or allowed by law, you may not use, copy,
reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform,
publish, or display any part, in any form, or by any means. Reverse engineering,
disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to
be error-free. If you find any errors, please report them to us in writing.
U.S. GOVERNMENT END USERS: Oracle programs, including any operating system,
integrated software, any programs installed on the hardware, and/or documentation,
delivered to U.S. Government end users are "commercial computer software" pursuant to the
applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As
such, use, duplication, disclosure, modification, and adaptation of the programs, including any
operating system, integrated software, any programs installed on the hardware, and/or
documentation, shall be subject to license terms and license restrictions applicable to the
programs. No other rights are granted to the U.S. Government.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may
be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC
trademarks are used under license and are trademarks or registered trademarks of SPARC
International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or
registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The
Open Group.
This software or hardware and documentation may provide access to or information about
content, products, and services from third parties. Oracle Corporation and its affiliates are not
responsible for and expressly disclaim all warranties of any kind with respect to third-party
content, products, and services unless otherwise set forth in an applicable agreement
between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any
loss, costs, or damages incurred due to your access to or use of third-party content, products,
or services, except as set forth in an applicable agreement between you and Oracle.
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Oracle customers that have purchased support have access to electronic support through My
Oracle Support. For information, visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
CONTENTS
CHAPTER 3 Audit
Overview of Audit
Contents of an Audit Log Event
Viewing Audit Log Events
Setting Audit Log Retention Period
GLOSSARY
RELEASE NOTES
If you're new to Oracle Cloud Infrastructure and would like to learn some key concepts and
take a quick tutorial, see the Oracle Cloud Infrastructure Getting Started Guide.
If you're ready to create cloud resources such as users, access controls, cloud networks,
instances, and storage volumes, this guide is right for you. It provides the following
information about using Oracle Cloud Infrastructure:
For a description of the terminology used throughout this guide, see the GLOSSARY.
Need API Documentation?
For general information, see About the API. For links to the detailed service
API documentation, see the online help at https://docs.us-phoenix-
1.oraclecloud.com/#dochome.htm.
Security Credentials
The types of credentials you'll use when working with Oracle Cloud Infrastructure.
Resource Identifiers
A description of the different ways your Oracle Cloud Infrastructure resources are identified.
Resource Tags
Information about Oracle Cloud Infrastructure tags and how to apply them to your resources.
Service Limits
A list of the default limits applied to your cloud resources and how to request an increase.
Security Credentials
This section describes the types of credentials you'll use when working with Oracle Cloud
Infrastructure.
Console Password
l What it's for: Using the Console.
l Format: Typical password text string.
l How to get one: An administrator will provide you with a one-time password.
l How to use it: Sign in to the Console the first time with the one-time password, and
then change it when prompted. Requirements for the password are displayed there. The
one-time password expires in seven days. If you want to change the password later,
see To change your Console password. Also, you or an administrator can reset the
password in the Console or with the API (see To create or reset a user's Console
password). Resetting the password creates a new one-time password that you'll be
prompted to change the next time you sign in to the Console. If you're blocked from
signing in to the Console because you've tried 10 times in a row unsuccessfully, contact
your administrator.
Swift Password
l What it's for: Using a Swift client to access Object Storage for the purposes of backing
up an Oracle Database System (DB System) database.
l Format: Typical password text string.
l How to get one: See Working with Swift Passwords.
l How to use it: Sign in to your Swift client with your Oracle Cloud Infrastructure
Console login, your Swift password provided by Oracle, and your organization's Oracle
tenant name.
Availability domains are isolated from each other, fault tolerant, and very unlikely to fail
simultaneously. Because availability domains do not share infrastructure such as power or
cooling, or the internal availability domain network, a failure at one availability domain is
unlikely to impact the availability of the others.
All the availability domains in a region are connected to each other by a low latency, high
bandwidth network, which makes it possible for you to provide high-availability connectivity
to the Internet and customer premises, and to build replicated systems in multiple availability
domains for both high-availability and disaster recovery.
Regions are completely independent of other regions and can be separated by vast
distances—across countries or even continents. Generally, you would deploy an application in
the region where it is most heavily used, since using nearby resources is faster than using
distant resources. However, you can also deploy applications in different regions to:
l mitigate the risk of region-wide events, such as large weather systems or earthquakes
l meet varying requirements for legal jurisdictions, tax domains, and other business or
social criteria
Resource Availability
The following sections list the resource types based on their availability: global across
regions, within a single region, or within a single availability domain.
Global Resources
l groups
l policies
l users
Regional Resources
l buckets: Although buckets are regional resources, they can be accessed from any
location if you use the correct region-specific Object Storage URL for the API calls.
l customer-premises equipment (CPE)
l DHCP options sets
l dynamic routing gateways (DRGs)
l images
l internet gateways
l load balancers
l local peering gateways (LPGs)
l reserved public IPs
l route tables
l security lists
l virtual cloud networks (VCNs)
l volume backups: They can be restored as new volumes to any availability domain
within the same region in which they are stored.
Availability Domain-Specific Resources
l DB Systems
l ephemeral public IPs
l instances: They can be attached only to volumes in the same availability domain.
l subnets
l volumes: They can be attached only to an instance in the same availability domain.
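The scope rules in the lists above can be sketched as a simple lookup table. This mapping is a hypothetical helper for illustration only, not part of any Oracle SDK:

```python
# Scope of each resource type, per the lists above: global, regional, or
# tied to a single availability domain. (Illustrative subset only.)
RESOURCE_SCOPE = {
    # global resources
    "groups": "global", "policies": "global", "users": "global",
    # regional resources
    "buckets": "region", "images": "region", "route tables": "region",
    "volume backups": "region",
    # availability domain-specific resources
    "DB Systems": "availability domain", "instances": "availability domain",
    "subnets": "availability domain", "volumes": "availability domain",
}

print(RESOURCE_SCOPE["volumes"])  # availability domain
```

A lookup like this makes the attachment constraints above concrete: a volume and the instance it attaches to must share the same availability domain, while a volume backup only needs to stay within the same region.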
Resource Identifiers
This chapter describes the different ways your Oracle Cloud Infrastructure resources are
identified.
Important: If you use the API, you'll need the OCID for your
tenancy. For information about where to find it, see the next
section. When you create any other resource, you can find its
OCID in the response. You can also retrieve a resource's
OCID by using a "List" API operation on that resource type,
or by viewing the resource in the Console.
l unique ID: The unique portion of the ID. The format may vary depending on the type of
resource or service.
Example OCIDs
l tenancy: ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq
l instance: ocid1.instance.oc1.phx.abuw4ljrlsfiqw6vzzxb43vyypt4pkodawglp3wqxjqofakrwvou52gb6s5a
You can find your tenancy's OCID in the Console, in the footer at the bottom of the page. The
tenancy OCID looks something like this (notice the word "tenancy" in it):
ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq
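Because OCIDs are dot-separated strings, a quick, illustrative way to pull out the resource type and region is to split on the delimiter. The field labels below are informal descriptions, not official API terms; note that tenancy OCIDs leave the region segment empty:

```python
# Split an OCID into its dot-separated parts. A tenancy OCID has an empty
# region segment (the ".." in the examples above), which we map to None.
def parse_ocid(ocid):
    version, resource_type, realm, region, unique_id = ocid.split(".", 4)
    return {
        "version": version,          # e.g. "ocid1"
        "resource_type": resource_type,
        "realm": realm,              # e.g. "oc1"
        "region": region or None,    # e.g. "phx", or None for tenancies
        "unique_id": unique_id,
    }

tenancy = parse_ocid(
    "ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq")
instance = parse_ocid(
    "ocid1.instance.oc1.phx.abuw4ljrlsfiqw6vzzxb43vyypt4pkodawglp3wqxjqofakrwvou52gb6s5a")
print(tenancy["resource_type"], tenancy["region"])    # tenancy None
print(instance["resource_type"], instance["region"])  # instance phx
```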
The name you assign to a user at creation is their login for the Console.
You can use these names instead of the OCID when writing a policy (for example, Allow
group <GROUP NAME> to manage all-resources in compartment <COMPARTMENT NAME>).
In addition to the name, you must also assign a description to each of your IAM resources
(although it can be an empty string). It can be a friendly description or other information that
helps you easily identify the resource. The description does not have to be unique, and you
can change it whenever you like. For example, you might want to use the description to store
the user's email address if you're not already using the email address as the user's unique
name.
Display Name
For most of the Oracle Cloud Infrastructure resources you create (other than those in IAM),
you can optionally assign a display name. It can be a friendly description or other information
that helps you easily identify the resource. The display name does not have to be unique, and
you can change it whenever you like. The Console shows the resource's display name along
with its OCID.
Resource Tags
When you have many resources (for example, instances, VCNs, load balancers, and block
volumes) across multiple compartments in your tenancy, it can become difficult to track
resources used for specific purposes, or to aggregate them, report on them, or take bulk
actions on them. Tagging allows you to define keys and values and associate them with
resources. You can then use the tags to help you organize and list resources based on your
business needs.
Defined tags are set up in your tenancy by an administrator. Only users granted permission
to work with the defined tags can apply them to resources.
Free-form tags can be applied by any user with permissions on the resource.
For more detailed information about tags and their features, see Overview of Tags.
1. Open the Console, and go to the details page of the resource you want to tag.
For example, to tag a compute instance: Click Compute to see the list of instances in
your current compartment. Find the instance that you want to tag, and click its name to
view its details page.
2. Click Apply Tags.
3. In the Apply Tags to the Resource dialog:
a. Select the Tag Namespace.
b. Select the Tag Key.
c. Enter a Value.
d. To apply another tag, click +.
e. When finished adding tags, click Apply Tag(s).
To apply a tag to a resource using the API, use the appropriate resource's create or update
operation.
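As a sketch of what the tag portion of such a create or update request body might look like: `freeformTags` is a flat string-to-string map and `definedTags` is nested by tag namespace (these are the field names used by the Oracle Cloud Infrastructure REST API at the time of writing; the namespace, key, and values below are hypothetical examples):

```python
import json

# Build the tag fields of a hypothetical create/update request body.
payload = {
    # Free-form tags: any user with permissions on the resource can apply.
    "freeformTags": {"project": "alpha"},
    # Defined tags: namespace -> key -> value, set up by an administrator.
    "definedTags": {"Operations": {"CostCenter": "42"}},
}
body = json.dumps(payload)
print(body)
```

The serialized `body` would be merged into the full request for the resource's create or update operation; the rest of that request is omitted here.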
Service Limits
This topic describes the service limits for Oracle Cloud Infrastructure and the process for
requesting a service limit increase.
Service limits are established for your tenancy when you sign up for Oracle Cloud
Infrastructure. If you did not establish limits with your Oracle sales representative, or if you
signed up through the Oracle Store, default or trial limits are set for your tenancy. You can
request to have a service limit raised.
You can view your tenancy's limits and usage by region in the Console. Be aware that:
l The Console may not yet display limits and usage information for all of the Oracle Cloud
Infrastructure services or resources.
l The usage level listed for a given resource type could be greater than the limit if the
limit was reduced after the resources were created.
l If all the resource limits are listed as 0, this means your account has been suspended.
For help, contact Oracle Support.
If you don't yet have a tenancy or a user login for the Console, or if you don't find a particular
limit listed in the Console, see Limits by Service for the default tenancy limits.
Required Permission
1. Open the Console, and then click the name of your tenancy in the top-left corner of the
page.
2. Click Limits on the left side of the page.
Your resource limits and usage for the specific region are displayed, broken out by service. If
a given resource type has limits per availability domain, the limit and usage for each
availability domain is displayed.
l The service or resource that you are requesting the service limit increase for.
For example: Request increase in limit for 256 GB block volumes.
l Requested limit increase.
For example: Increase the service limit to 10 volumes.
l Reason for the request. Describe what you are trying to accomplish with the
increase.
Limits by Service
The following tables list the default limits for each service. Note the scope that each limit
applies to (for example, per availability domain, per region, per tenant, etc.).
Backups 100 20
Compute Limits
Limits apply to each availability domain, unless otherwise noted. There are three availability
domains per region.
Virtual Machines
VM.Standard1.1 5 3
VM.Standard1.2 5 3
VM.Standard1.4 5 Contact Us
VM.Standard1.8 5 Contact Us
VM.Standard1.16 5 Contact Us
VM.DenseIO1.4 5 1
VM.DenseIO1.8 5 Contact Us
VM.DenseIO1.16 5 Contact Us
VM.Standard2.1 1 1
VM.Standard2.2 1 1
VM.Standard2.4 1 Contact Us
VM.Standard2.8 1 Contact Us
VM.Standard2.16 1 Contact Us
VM.Standard2.24 1 Contact Us
VM.DenseIO2.8 1 Contact Us
VM.DenseIO2.16 1 Contact Us
VM.DenseIO2.24 1 Contact Us
Transfer jobs 0 per OCI tenancy 0 per OCI tenancy
Contact My Oracle Support to file a service request to increase the service limits for Data
Transfer service. See Requesting a Service Limit Increase for details.
Database Limits
Database limits are per availability domain. There are three availability domains per region.
IAM Limits
IAM limits are global.
Groups in a tenancy 50 50
Compartments in a tenancy 50 50
Statements in a policy 50 50
Networking Limits
Networking service limits apply to different scopes, depending on the resource.
VCN Region 10 10
IP Address Limits
50 egress rules 50 egress rules
DHCP Option Limits
You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or
the REST API. Instructions for the Console and API are included in topics throughout this
guide. For a list of available SDKs, see SDKs and Other Tools.
To access the Console, you must use a supported browser. Oracle Cloud Infrastructure
supports the latest versions of Google Chrome, Microsoft Edge, Internet Explorer 11, Firefox,
and Firefox ESR. Note that private browsing mode is not supported for Firefox, Internet
Explorer, or Edge.
Along with this compartment, Oracle creates the IAM policies to allow Oracle Platform
Services access to the resources.
The compartment that Oracle creates for Oracle Platform Services is named:
ManagedCompartmentForPaaS
The policies that Oracle creates for Oracle Platform Services are:
l PSM-root-policy
This policy is attached to the root compartment of your tenancy.
l PSM-mgd-comp-policy
This policy is attached to the ManagedCompartmentForPaaS compartment.
Note that if you already have a VCN you still must create the IAM policies to allow Oracle
Platform Services access to the resources.
To follow a tutorial on how to set up the prerequisites for Scenario 1, see Creating the
Infrastructure Resources Required for Oracle Platform Services.
Create a compartment
1. In the Oracle Cloud Infrastructure Console, click Identity, and then click
Compartments.
A list of the existing compartments in your tenancy is displayed.
2. Click Create Compartment.
3. Enter the following:
l Name: A unique name for the policy. The name must be unique across all policies
in your tenancy. You cannot change this later.
l Description: A friendly description. You can change this later if you want to.
l Policy Versioning: Select Keep Policy Current if you'd like the policy to stay
current with any future changes to the service's definitions of verbs and
resources. Or if you'd prefer to limit access according to the definitions that were
current on a specific date, select Use Version Date and enter that date in YYYY-MM-DD
format. For more information, see Policy Language Version.
l Statement: To allow Oracle Platform Services access to use the network in your
compartment, enter the following policy statements. Replace <compartment_
name> with your compartment name. Click + after each statement to add
another.
Allow service PSM to inspect vcns in compartment <compartment_name>
Allow service PSM to use subnets in compartment <compartment_name>
Allow service PSM to use vnics in compartment <compartment_name>
Allow service PSM to manage security-lists in compartment <compartment_name>
For more information about policies, see Policy Basics and also Policy Syntax.
5. Click Create.
Create a bucket
1. In the Oracle Cloud Infrastructure Console, click Storage, and then click Object
Storage.
2. Choose the compartment you created.
3. Click Create Bucket.
4. In the Create Bucket dialog, enter a bucket name, for example: PaasBucket.
Make a note of the name you enter. You will need it when you create an instance for
your Oracle Platform Service later.
5. Click Create Bucket.
You can use an existing VCN. The VCN must have at least one public subnet. Perform these
tasks to complete the prerequisites:
l Name: A unique name for the policy. The name must be unique across all policies
in your tenancy. You cannot change this later.
l Description: A friendly description. You can change this later if you want to.
l Policy Versioning: Select Keep Policy Current if you'd like the policy to stay
current with any future changes to the service's definitions of verbs and
resources. Or if you'd prefer to limit access according to the definitions that were
current on a specific date, select Use Version Date and enter that date in YYYY-
MM-DD format. For more information, see Policy Language Version.
l Statement: To allow Oracle Platform Services access to use the network, enter
the following policy. Click + after each statement to add another. In each
statement, replace <compartment_name> with the name of the compartment
where your VCN resides.
Allow service PSM to inspect vcns in compartment <compartment_name>
Allow service PSM to use subnets in compartment <compartment_name>
Allow service PSM to use vnics in compartment <compartment_name>
Allow service PSM to manage security-lists in compartment <compartment_name>
For more information about policies, see Policy Basics and also Policy Syntax.
5. Click Create.
Create a bucket
1. In the Oracle Cloud Infrastructure Console, click Storage, and then click Object
Storage.
2. Choose the compartment you want to create the bucket in.
3. Click Create Bucket.
4. In the Create Bucket dialog, enter a bucket name. Make a note of the name you enter.
You will need it when you create an instance for your Oracle Platform Service later.
5. Click Create Bucket.
Service Documentation
Java Cloud Service: About Java Cloud Service Instances in Oracle Cloud Infrastructure
Event Hub Cloud Service: About Oracle Event Hub Cloud Service - Dedicated instances in Oracle Cloud Infrastructure
Data Hub Cloud Service: About Oracle Data Hub Cloud Service Clusters in Oracle Cloud Infrastructure
Big Data Cloud: About Big Data Cloud Clusters in Oracle Cloud Infrastructure
Overview of Audit
The Oracle Cloud Infrastructure Audit service automatically records calls to all supported
Oracle Cloud Infrastructure public application programming interface (API) endpoints as log
events. Currently, all services support logging by Audit; however, the Object Storage service
supports logging for bucket-related events only, not for object-related events. Log events
recorded by
the Audit service include API calls made by the Oracle Cloud Infrastructure Console,
Command Line Interface (CLI), Software Development Kits (SDK), your own custom clients,
or other Oracle Cloud Infrastructure services. Information in the logs shows what time API
activity occurred, the source of the activity, the target of the activity, what the action was,
and what the response was.
Each log event includes a header ID, target resource(s), time stamp of the recorded event,
request parameters, and response parameters. You can view events logged by the Audit
service by using the Console, API, or the Java SDK. You can view events, copy the details of
individual events, and analyze events or store them separately. Data from events can
be used to perform diagnostics, track resource usage, monitor compliance, and collect
security-related events.
To access the Console, you must use a supported browser. Oracle Cloud Infrastructure
supports the latest versions of Google Chrome, Microsoft Edge, Internet Explorer 11, Firefox,
and Firefox ESR. Note that private browsing mode is not supported for Firefox, Internet
Explorer, or Edge.
For general information about using the API, see About the API.
An administrator in your organization needs to set up groups, compartments, and policies that
control which users can access which services, which resources, and the type of access. For
example, the policies control who can create new users, create and manage the cloud
network, launch instances, create buckets, download objects, etc. For more information, see
Getting Started with Policies. For specific details about writing policies for each of the
different services, see Policy Reference.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud
Infrastructure resources that your company owns, contact your administrator to set up a user
ID for you. The administrator can confirm which compartment or compartments you should be
using.
Property Description
eventType The type of the event. (Currently, Audit supports only API
activities.)
principalId The OCID of the user whose action triggered the event.
requestAgent The user agent of the client that made the request.
requestParameters The query parameter fields and values for the request.
Resource Identifiers
Each Oracle Cloud Infrastructure resource has a unique, Oracle-assigned identifier called an
Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify
your resources, see Resource Identifiers.
"443"
],
"X-Forwarded-For": [
"192.0.2.0"
],
"Opc-Request-Id": [
"0e1e3938-681a-4195-cvb7-35c84311f2ad"
],
"X-Forwarded-Proto": [
"https"
],
"Accept": [
"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
],
"X-Date": [
"Fri, 06 Jan 2017 02:44:46 GMT"
],
"Referer": [
"https://console.us-phoenix-1.oraclecloud.com/"
]
},
"compartmentId":
"ocid1.compartment.oc1..aaaaaaaacvb24qzjpmwdmt33f3zz5kdh2jpvzm52n5spet3nndh3diys7pca",
"requestId": "0e1e3938-681a-4195-cvb7-
35c84311/4237AD7DCD5647AA9781193B76910E62/C719042D251B473D929928AE903AB5C9",
"eventSource": "CoreServicesPublicApi",
"responseStatus": "200",
"requestParameters": {
"compartmentId": [
"ocid1.compartment.oc1..aaaaaaaacvb24qzjpmwdmt33f3zz5kdh2jpvzm52n5spet3nndh3diys7pca"
]
},
"requestAction": "GET",
"tenantId": "ocidv1:tenancy:oc1:phx:1457636318783:aaaaaaaacvbgrhwsljxg6mk55eo2tfvxwy",
"responseHeaders": {
"Access-Control-Expose-Headers": [
"opc-previous-page,opc-next-page,opc-client-info,ETag,opc-request-id,Location"
],
"Access-Control-Allow-Origin": [
"https://console.us-phoenix-1.oraclecloud.com"
],
"Access-Control-Allow-Credentials": [
"true"
],
"Content-Encoding": [
"gzip"
],
"Vary": [
"Accept-Encoding"
],
"opc-request-id": [
"0e1e3938-681a-4195-cvb7-
35c84311/4237AD7DCD5647AA9781193B76910E62/C719042D251B473D929928AE903AB5C9"
],
"Date": [
"Fri, 06 Jan 2017 02:44:47 GMT"
],
"Content-Type": [
"application/json"
]
},
"principalId": "ocid1.user.oc1..aaaaaaaavephoacvbfuxmqlwb4t7dvik5m2ibuokweo6oadif5pda7nxv2nwp3a",
"requestOrigin": "192.0.2.0",
"eventTime": "2017-01-06T02:44:47.599Z",
"eventId": "d30040ae-1b7c-cvbb-97d2-37da42ea6caf",
"requestResource": "/20160918/instances/"
},
...
{
"requestAgent": "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0",
"credentialId": "<tenant_id/user_id/fingerprint>",
"responseTime": "2017-01-06T02:45:27.918Z",
"eventType": "ServiceApi",
"requestHeaders": {
"origin": [
"https://console.us-phoenix-1.oraclecloud.com"
],
"Authorization": [
"<authorization_string>"
],
"X-Real-IP": [
"192.0.2.0"
],
"User-Agent": [
"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Firefox/45.0"
],
"Accept-Encoding": [
"gzip, deflate"
],
"Opc-Client-Info": [
"Oracle-HgConsole/0.0.1"
],
"Accept-Language": [
"en-US,en;q=0.5"
],
"X-Forwarded-Port": [
"443"
],
"X-Forwarded-For": [
"192.0.2.0"
],
"Opc-Request-Id": [
"34b5ac1a-e9ee-4c8c-cvb7-be74f6d4fd07"
],
"X-Forwarded-Proto": [
"https"
],
"Accept": [
"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
],
"X-Date": [
"Fri, 06 Jan 2017 02:45:27 GMT"
],
"Referer": [
"https://console.us-phoenix-1.oraclecloud.com/"
],
"Host": [
"iaas.us-phoenix-1.oraclecloud.com"
]
},
"compartmentId":
"ocid1.compartment.oc1..aaaaaaaacvb24qzjpmwdmt33f3zz5kdh2jpvzm52n5spet3nndh3diys7pca",
"requestId": "34b5ac1a-e9ee-4c8c-cvb7-
be74f6d4/77C6104EA72B47BFB39ED6D88AFFB067/0A4B35F3E09C474A96C3EB4C0DCD6BF1",
"eventSource": "CoreServicesPublicApi",
"responseStatus": "200",
"requestParameters": {
"compartmentId": [
"ocid1.compartment.oc1..aaaaaaaacvb24qzjpmwdmt33f3zz5kdh2jpvzm52n5spet3nndh3diys7pca"
]
},
"requestAction": "GET",
"tenantId": "ocidv1:tenancy:oc1:phx:1457636318783:aaaaaaaacvbgrhwsljxg6mk55eo2tfvxwy",
"responseHeaders": {
"Access-Control-Expose-Headers": [
"opc-previous-page,opc-next-page,opc-client-info,ETag,opc-request-id,Location"
],
"Access-Control-Allow-Origin": [
"https://console.us-phoenix-1.oraclecloud.com"
],
"Access-Control-Allow-Credentials": [
"true"
],
"Content-Encoding": [
"gzip"
],
"Vary": [
"Accept-Encoding"
],
"opc-request-id": [
"34b5ac1a-e9ee-4c8c-cvb7-
be74f6d4/77C6104EA72B47BFB39ED6D88AFFB067/0A4B35F3E09C474A96C3EB4C0DCD6BF1"
],
"Date": [
"Fri, 06 Jan 2017 02:45:27 GMT"
],
"Content-Type": [
"application/json"
]
},
"principalId": "ocid1.user.oc1..aaaaaaaavephoacvbfuxmqlwb4t7dvik5m2ibuokweo6oadif5pda7nxv2nwp3a",
"requestOrigin": "192.0.2.0",
"eventTime": "2017-01-06T02:45:27.816Z",
"eventId": "51bfd56b-9574-4ea4-cvbb-536c584792e1",
"requestResource": "/20160918/instances/"
}
]
When viewing events logged by the Audit service, you might be interested in specific activities that
happened in the tenancy or compartment and who was responsible for the activity. You will
need to know the approximate time and date something happened and the compartment in
which it happened to display a list of log events that includes the activity in question. List log
events by specifying a time range on the 24-hour clock in Greenwich Mean Time (GMT),
calculating the offset for your local time zone, as appropriate. New activity is appended to the
existing list, usually within 15 minutes of the API call, though processing time can vary.
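For example, to express a local-time window as the GMT (UTC) range the service expects, you can convert with the standard library. The UTC-8 offset and the one-hour window below are made-up examples; substitute your own zone and range:

```python
from datetime import datetime, timedelta, timezone

# A time of interest in a hypothetical local zone at UTC-8.
local_zone = timezone(timedelta(hours=-8))
local_time = datetime(2017, 1, 5, 18, 44, tzinfo=local_zone)

# Convert to GMT/UTC and pick a one-hour window starting there.
start = local_time.astimezone(timezone.utc)
end = start + timedelta(hours=1)
print(start.isoformat(), end.isoformat())
```

Here 18:44 local (UTC-8) becomes 02:44 GMT on the following day, which is the form in which event times such as `eventTime` appear in the logs.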
For administrators: The following policy statement gives the specified group the ability to
view all the Audit event logs in the tenancy:
Allow group Auditors to read audit-events in tenancy
If you're new to policies, see Getting Started with Policies and Common Policies. For more
details about policies for the Audit service, see Details for the Audit Service.
In general, the following fields can help you search for a specific event if you know what time
the activity occurred:
l eventTime
l responseTime
For example, a user might report that their attempts to log in began failing at a certain time.
You can search for corresponding operations to confirm the failure and others preceding that
operation to correlate with a reason why.
If you have information about what specific actions occurred in your environment, you can
filter according to one of the following fields:
l requestAction
l requestParameters
l requestResource
l responseStatus
For example, when an instance is deleted, you can search for the instance ID in the
requestResource field along with a DELETE operation in the requestAction field.
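A sketch of that kind of search over already-retrieved events, treating each log event as a parsed JSON dictionary (the sample events and the instance OCID below are made up; the field names match the event properties described in this chapter):

```python
# Hypothetical list of audit log events, already parsed from JSON.
events = [
    {"requestAction": "GET",
     "requestResource": "/20160918/instances/"},
    {"requestAction": "DELETE",
     "requestResource": "/20160918/instances/ocid1.instance.oc1.phx.example"},
]

def matching(events, action, resource_fragment):
    """Return events whose action matches and whose resource contains the fragment."""
    return [e for e in events
            if e["requestAction"] == action
            and resource_fragment in e["requestResource"]]

# Find DELETE calls that touched a specific instance OCID.
hits = matching(events, "DELETE", "ocid1.instance.oc1.phx.example")
print(len(hits))  # 1
```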
Or, if you know who performed the actions in question, you might be interested in the values
in one or more of the following fields:
l principalId
l requestAgent
l compartmentId
The principalId shows the unique Oracle-assigned identifier (OCID) of the user making the
call. If you want to know what activities a specific user has been performing, search for
events where their OCID appears in the principalId field.
The list is updated to include only log events that were processed within the time range you
specified. If an event occurred in the recent past, you might have to wait to see it in the list.
The service typically requires up to 15 minutes for processing. If there are more than 20
results for the specified time range, you can click the right arrow next to the page number to
advance to the next page of log events.
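The paging behavior described above (20 results per page) can be sketched as a simple slice over a result list; the event list here is made up:

```python
PAGE_SIZE = 20  # the Console shows 20 log events per page

events = [f"event-{i}" for i in range(45)]  # hypothetical 45 results

def page(events, number):
    """Return the 1-indexed page of events."""
    start = (number - 1) * PAGE_SIZE
    return events[start:start + PAGE_SIZE]

print(len(page(events, 1)), len(page(events, 3)))  # 20 5
```

With 45 results in the specified time range, pages 1 and 2 are full and page 3 holds the remaining 5 events.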
If you want to view all the key-value pairs in a log event, see To view the details of a log
event.
l In the log event row, click the Actions icon, and then click Details.
l In the log event row, click the Actions icon, and then click Copy.
The log event is copied to your clipboard. The Audit service logs events in JSON format. You can paste
the log event details into a text editor to save and review later or to use with standard log
analysis tools.
Note: This is a query API. Do not use this API for bulk-export
operations.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
l ListEvents
Retention period is a tenancy-level setting. The value of the retention period setting affects all
regions and all compartments. You can't set different retention periods for different regions or
compartments.
Use the following operations to manage the log retention period configuration:
l GetConfiguration
l UpdateConfiguration
The components required to create a volume and attach it to an instance are briefly described
below.
l Instance: A bare metal or virtual machine (VM) host running in the cloud.
l iSCSI: A TCP/IP-based standard used for communication between a volume and
attached instance. See iSCSI Commands and Information for more information.
l Volume: There are two types of volumes:
o Block volume: A detachable block storage device that allows you to dynamically
expand the storage capacity of an instance.
o Boot volume: A detachable boot volume device that contains the image used to
boot a Compute instance. See Boot Volumes for more information.
A common use of Block Volume is adding storage capacity to an Oracle Cloud Infrastructure
instance. After you launch an instance and set up your cloud network, you can create a
block storage volume through the Console or API. You then attach the volume to an
instance using a volume attachment, and connect to the volume from your instance's guest
OS using iSCSI. The volume can then be mounted and used by your instance.
A Block Volume volume can be detached from an instance and moved to a different instance
without loss of data. This data persistence allows you to easily migrate data between
instances and ensures that your data is safely stored, even when it is not connected to an
instance. Any data will remain intact until you reformat or delete the volume.
To move your volume to another instance, unmount the drive from the initial instance,
terminate the iSCSI connection, and attach it to the second instance. From there, you simply
connect and mount the drive from that instance's guest OS to instantly have access to all of
your data.
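The move procedure above can be sketched as a short command sequence. This is a hypothetical example: the IQN, IP address, and mount point are placeholders you would replace with the values from the Console's iSCSI Information dialog, and the sequence is printed rather than executed so you can review it first.

```shell
# Sketch of the detach side of moving a volume (all values are placeholders).
# The commands are printed, not run; execute each by hand on the first instance.
move_volume_cmds=$(cat <<'EOF'
sudo umount /mnt/vol1                                                  # unmount the drive
sudo iscsiadm -m node -T <volume-IQN> -p <volume-IP>:3260 -u           # log out of the iSCSI session
sudo iscsiadm -m node -o delete -T <volume-IQN> -p <volume-IP>:3260    # remove the node record
EOF
)
echo "$move_volume_cmds"
```

After detaching the volume in the Console and attaching it to the second instance, you repeat the connect-and-mount steps there.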
Additionally, Block Volume volumes offer a high level of data durability compared to standard,
attached drives. All volumes are automatically replicated for you, helping to protect against
data loss.
When you terminate an instance, you can keep the associated boot volume and use it to
launch a new instance using a different instance type or shape. See Launching an Instance for
how to launch an instance based on a boot volume. This allows you to easily switch from a
bare metal instance to a VM instance and vice versa, or scale up or down the number of cores
for an instance.
Resource Identifiers
Each Oracle Cloud Infrastructure resource has a unique, Oracle-assigned identifier called an
Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify
your resources, see Resource Identifiers.
To access the Console, you must use a supported browser. Oracle Cloud Infrastructure
supports the latest versions of Google Chrome, Microsoft Edge, Internet Explorer 11, Firefox,
and Firefox ESR. Note that private browsing mode is not supported for Firefox, Internet
Explorer, or Edge.
For general information about using the API, see About the API.
An administrator in your organization needs to set up groups, compartments, and policies that
control which users can access which services, which resources, and the type of access. For
example, the policies control who can create new users, create and manage the cloud
network, launch instances, create buckets, download objects, etc. For more information, see
Getting Started with Policies. For specific details about writing policies for each of the
different services, see Policy Reference.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud
Infrastructure resources that your company owns, contact your administrator to set up a user
ID for you. The administrator can confirm which compartment or compartments you should be
using.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
Encryption
Block Volume uses the Advanced Encryption Standard (AES) algorithm with 256 bit key for
encryption. Block volumes are encrypted at rest. Backups are also encrypted.
See Service Limits for a list of applicable limits and instructions for requesting a limit
increase.
Boot Volumes
When you launch a virtual machine (VM) or bare metal instance based on an Oracle-provided
image or custom image, a new boot volume for the instance is created in the same
compartment. That boot volume is associated with that instance until you terminate the
instance. When you terminate the instance, you can preserve the boot volume and its data,
see Terminating an Instance. This feature gives you more control and management options
for your compute instance boot volumes, and enables:
l Instance scaling: When you terminate your instance, you can keep the associated
boot volume and use it to launch a new instance using a different instance type or
shape. See Launching an Instance for how to launch an instance based on a boot
volume. This allows you to switch easily from a bare metal instance to a VM instance
and vice versa, or scale up or down the number of cores for an instance.
l Troubleshooting and repair: If you think a boot volume issue is causing a compute
instance problem, you can stop the instance and detach the boot volume. Then you can
attach it to another instance as a data volume to troubleshoot it. After resolving the
issue, you can then reattach it to the original instance or use it to launch a new instance.
Boot volumes are encrypted by default, the same as other block storage volumes. For more
information, see Overview of Block Volume.
For more information about the Block Volume service and boot volumes, see the Block
Volume FAQ.
For administrators: The policy in Let Users Launch Instances includes the ability to list boot
volumes. The policy in Let Volume Admins Manage Block Volumes and Backups lets the
specified group do everything with block volumes, boot volumes, and backups, but not launch
instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
l BootVolume
l ListBootVolumes
l GetBootVolume
l UpdateBootVolume
l DetachBootVolume
l DeleteBootVolume
l BootVolumeAttachment
l AttachBootVolume
l GetBootVolumeAttachment
l ListBootVolumeAttachments
To enhance security, Oracle enforces an iSCSI security protocol called CHAP that provides
authentication between the instance and volume.
l IP address
l Port
l CHAP user name and password (if enabled)
l IQN
The Console provides this information on the details page of the volume's attached instance.
Click the Actions icon on your volume's row, and then click iSCSI Information. The
system also returns this information when the AttachVolume API operation completes
successfully. You can re-run the operation with the same parameter values to review the
information.
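As a sketch, the commands the Console typically provides follow this shape. The IQN, IP address, and CHAP credentials are placeholders, and the sequence is printed here rather than executed:

```shell
# Hypothetical iSCSI attach sequence with optional CHAP (values are placeholders).
iscsi_attach_cmds=$(cat <<'EOF'
sudo iscsiadm -m node -o new -T <volume-IQN> -p <volume-IP>:3260
sudo iscsiadm -m node -o update -T <volume-IQN> -n node.startup -v automatic
# The next three lines are only needed if CHAP is enabled for the attachment:
sudo iscsiadm -m node -o update -T <volume-IQN> -n node.session.auth.authmethod -v CHAP
sudo iscsiadm -m node -o update -T <volume-IQN> -n node.session.auth.username -v <CHAP-user>
sudo iscsiadm -m node -o update -T <volume-IQN> -n node.session.auth.password -v <CHAP-password>
sudo iscsiadm -m node -T <volume-IQN> -p <volume-IP>:3260 -l
EOF
)
echo "$iscsi_attach_cmds"
```

Always use the exact commands shown in the Console for your attachment; the above only illustrates their general form.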
Additional Reading
There is a wealth of information on the internet about iSCSI and CHAP. If you need more
information on these topics, try the following pages:
l What is iSCSI?
l Oracle Linux Administrator's Guide - About iSCSI Storage
l Troubleshooting iSCSI Configuration Problems
l iscsiadm Basics
Creating a Volume
You can create a volume using Block Volume.
For administrators: The policy in Let Volume Admins Manage Block Volumes and Backups lets
the specified group do everything with block volumes and backups.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
l CreateVolume
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
Attaching a Volume
You can attach a volume to an instance in order to expand the available storage on the
instance. Once attached, you must still connect and mount the volume from the instance for
the volume to be usable. For more information, see Connecting to a Volume.
For administrators: The policy in Let Users Launch Instances includes the ability to
attach/detach existing block volumes. The policy in Let Volume Admins Manage Block
Volumes and Backups lets the specified group do everything with block volumes and backups,
but not launch instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
5. Click Attach.
You can connect to the volume once the volume's icon no longer lists it as Attaching.
For more information, see Connecting to a Volume.
l AttachVolume
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
/etc/fstab Options
Block volumes on Oracle Cloud Infrastructure use iSCSI to connect to instances. On Linux
instances, if you want to automatically mount iSCSI volumes on instance boot, you need
to set some specific options in the /etc/fstab file, or the instance may fail to launch. The
launch failure can occur because the operating system tries to mount the volume before the
iSCSI initiator has started. This topic covers the options to use.
Volume UUIDs
On Linux operating systems, the order in which iSCSI volumes are attached is non-
deterministic, so it can change with each reboot. If you refer to a volume using the device
name, such as /dev/sdb, and you have more than one non-root iSCSI volume, you can't
guarantee that the volume you intend to mount for a specific device name will be the volume
mounted.
To prevent this issue, specify the volume UUID in the /etc/fstab file instead of the device
name. When you use the UUID, the mount process matches the UUID in the superblock with
the mount point specified in the /etc/fstab file. This process guarantees that the same
volume is always mounted to the same mount point.
The root volume in this output is /dev/sda*. The additional remote iSCSI volumes are:
l /dev/sdb
l /dev/sdc
l /dev/sdd
By default, the /etc/fstab file is processed before the iSCSI initiator starts. To ensure that
the iSCSI initiator starts before these volumes are mounted, specify the _netdev option on
each line of the /etc/fstab file.
When you create a custom image of an instance whose /etc/fstab file lists volumes other than
the root volume, instances will fail to launch from the custom image. Specify the nofail
option in the /etc/fstab file to prevent this issue.
In the example scenario with three volumes, the /etc/fstab file entries for the volumes with
the _netdev and nofail options are as follows:
UUID=699a776a-3d8d-4c88-8f46-209101f318b6 /mnt/vol1 xfs defaults,_netdev,nofail 0 2
UUID=ba0ac1d3-58cf-4ff0-bd28-f2df532f7de9 /mnt/vol2 xfs defaults,_netdev,nofail 0 2
UUID=85566369-7148-4ffc-bf97-50954cae7854 /mnt/vol3 xfs defaults,_netdev,nofail 0 2
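The entries above follow a fixed pattern, so they can be generated mechanically. A minimal sketch, assuming a hypothetical helper name and XFS as the default file system:

```shell
# fstab_entry UUID MOUNTPOINT [FSTYPE] -- emit an /etc/fstab line with the
# _netdev and nofail options described in this topic (hypothetical helper).
fstab_entry() {
  printf 'UUID=%s %s %s defaults,_netdev,nofail 0 2\n' "$1" "$2" "${3:-xfs}"
}
fstab_entry 699a776a-3d8d-4c88-8f46-209101f318b6 /mnt/vol1
```

You can discover a volume's UUID on the instance with sudo blkid.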
Once you have updated the /etc/fstab file, use the following command to mount the
volumes:
bash-4.2$ sudo mount -a
Reboot the instance to confirm that the volumes are mounted properly on reboot, using the
following command:
bash-4.2$ sudo reboot
If the instance fails to reboot after you update the /etc/fstab file, you may need to undo the
changes to the /etc/fstab file. To update the file, connect to the serial console for the
instance using the steps described in Instance Console Connections. Once you have access to
the instance using the serial console connection, you can remove, comment out, or fix the
changes you made to the /etc/fstab file.
You can also reattach a boot volume to the associated instance. If you want to restart an
instance with a detached boot volume, you must reattach the boot volume using the steps
described in this topic.
For administrators: The policy in Let Users Launch Instances includes the ability to
attach/detach existing block volumes. The policy in Let Volume Admins Manage Block
Volumes and Backups lets the specified group do everything with block volumes and backups,
but not launch instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
You can start the instance once the boot volume's icon no longer lists it as ATTACHING.
For more information, see Stopping and Starting an Instance.
l AttachBootVolume
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
Connecting to a Volume
You can connect a volume to an instance's guest OS. In order to connect the volume, you must
first attach the volume to the instance. For additional information, see Attaching a Volume.
Prerequisites
You must attach the volume to the instance before you can connect the volume to the
instance's guest OS. For details, see Attaching a Volume.
l iSCSI IP Address
l iSCSI Port numbers
l CHAP credentials (if you enabled CHAP)
l IQN
The Console provides the commands required to configure, authenticate, and log on to iSCSI.
7. You can now format (if needed) and mount the volume. To get a list of mountable iSCSI
devices on the instance, run the following command:
fdisk -l
9. Make sure that the Add this connection to the list of favorite targets check box
is selected, and then click OK.
10. You can now format (if needed) and mount the volume. To view a list of mountable
iSCSI devices on your instance, in Server Manager, click File and Storage
Services, and then click Disks.
The disk is displayed in the list.
Listing Volumes
You can list all Block Volume volumes in a specific compartment, as well as detailed
information on a single volume.
For administrators: The policy in Let Users Launch Instances includes the ability to list
volumes. The policy in Let Volume Admins Manage Block Volumes and Backups lets the
specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
List Volumes:
l ListVolumes
l GetVolume
For administrators: The policy in Let Users Launch Instances includes the ability to list
volumes. The policy in Let Volume Admins Manage Block Volumes and Backups lets the
specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
The instance associated with the boot volume is listed in the Attached Instance field. If this
field displays None in this Compartment, the boot volume has been detached from the
associated instance, or the instance was terminated while the boot volume was preserved.
l ListBootVolumes
l GetBootVolume
For administrators: The policy in Let Users Launch Instances includes the ability to list volume
attachments. The policy in Let Volume Admins Manage Block Volumes and Backups lets the
specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
List Attachments:
l ListVolumeAttachments
l GetVolumeAttachment
For administrators: The policy in Let Users Launch Instances includes the ability to list volume
attachments. The policy in Let Volume Admins Manage Block Volumes and Backups lets the
specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
l ListBootVolumeAttachments
l GetBootVolumeAttachment
Renaming a Volume
You can use the API to change the display name of a Block Volume volume.
For administrators: The policy in Let Users Launch Instances includes the ability to rename
block volumes. The policy in Let Volume Admins Manage Block Volumes and Backups lets the
specified group do everything with block volumes and backups, but not launch instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
l UpdateVolume
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
l Creating multiple copies of the same volume. Backups are highly useful in cases where
you need to create many instances whose volumes must contain the same data.
l Taking a snapshot of your work that you can restore to a new volume at a later time.
l Ensuring you have a spare copy of your volume in case something goes wrong with your
primary copy.
l Before creating a backup, you should ensure that the data is consistent: Sync the file
system, unmount the file system if possible, and save your application data. Only the
data on the disk will be backed up. When creating a backup, once the backup state
changes from REQUEST_RECEIVED to CREATING, you can return to writing data to the
volume. While a backup is in progress, the volume that is being backed up cannot be
deleted.
l If you want to attach a restored volume that has the original volume attached, be aware
that some operating systems do not allow you to restore identical volumes. To resolve
this, you should change the partition IDs before restoring the volume. How to change an
operating system's partition ID varies by operating system; for instructions, see your
operating system's documentation.
l You should not delete the original volume until you have verified that the backup you
created of it completed successfully.
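On a Linux instance, the consistency steps in the first point can be sketched as follows. The mount point is a placeholder, and the commands are printed as a checklist rather than executed:

```shell
# Hypothetical pre/post-backup steps for consistent data (placeholder mount point).
backup_prep_cmds=$(cat <<'EOF'
sync                           # flush dirty pages to disk
sudo fsfreeze -f /mnt/vol1     # quiesce the file system (or unmount it if possible)
# ...start the backup; once its state changes to CREATING, writes can resume...
sudo fsfreeze -u /mnt/vol1     # unfreeze the file system
EOF
)
echo "$backup_prep_cmds"
```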
See Backing Up a Volume and Restoring a Backup to a New Volume for more information.
Backing Up a Volume
You can create a backup of a volume using Block Volume.
If you're told you don't have permission or are unauthorized, confirm with your administrator
the type of access you've been granted and which compartment you should work in.
For administrators: The policy in Let Volume Admins Manage Block Volumes and Backups lets
the specified group do everything with block volumes and backups. The policy in Let Volume
Backup Admins Manage Only Backups further restricts access to just creating and managing
backups.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
l CreateVolumeBackup
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
For more information about backups, see Overview of Block Volume Backups and Restoring a
Backup to a New Volume.
For administrators: The policy in Let Volume Admins Manage Block Volumes and Backups lets
the specified group do everything with block volumes and backups.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
For more information about backups, see Overview of Block Volume Backups and Backing Up
a Volume.
Cloning a Volume
You can create a clone from a volume using the Block Volume service. Cloning enables you to
make a copy of an existing block volume without needing to go through the backup and
restore process. For more information about volume backups, see Overview of Block Volume
Backups and Backing Up a Volume. For more information about the Block Volume service and
cloned volumes, see the Block Volume FAQ.
A cloned volume is a point-in-time direct disk-to-disk deep copy of the source volume, so all
the data that is in the source volume when the clone is created is copied to the clone volume.
Any subsequent changes to the data on the source volume are not copied to the clone. Because
the clone is a copy of the source volume, you cannot change its size; it will be the same size
as the source volume.
The clone operation occurs immediately, and you can attach and use the cloned volume as a
regular volume as soon as its state changes to available. At that point, the volume data is
still being copied in the background, which can take up to thirty minutes depending on the
size of the volume.
There is a single point-in-time reference for a source volume while it is being cloned, so if the
source volume is attached when a clone is created, you need to wait for the first clone
operation to complete from the source volume before creating additional clones. If the source
volume is detached, you can create up to ten clones from the same source volume
simultaneously.
You can only create a clone for a volume within the same region, availability domain, and
tenancy. You can create a clone for a volume between compartments as long as you have the
required access permissions for the operation.
The volume is ready to use once its icon lists it as AVAILABLE in the volume list. At this point,
you can perform various actions on the volume such as creating a clone from the volume,
attaching it to an instance, or deleting the volume.
To create a clone from a volume, use the CreateVolume operation and specify
VolumeSourceFromVolumeDetails for CreateVolumeDetails.
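As a sketch, a CreateVolume request body that clones an existing volume might look like the following. The availability domain and OCIDs are placeholders; the sourceDetails object with "type": "volume" is what selects VolumeSourceFromVolumeDetails:

```shell
# Write a hypothetical CreateVolume request body that clones a source volume.
# All identifiers below are placeholders, not real values.
cat > /tmp/clone-volume.json <<'EOF'
{
  "availabilityDomain": "<availability-domain>",
  "compartmentId": "<compartment-OCID>",
  "displayName": "clone-of-my-volume",
  "sourceDetails": {
    "type": "volume",
    "id": "<source-volume-OCID>"
  }
}
EOF
grep '"type"' /tmp/clone-volume.json
```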
3. You can now detach the volume without the risk of losing data.
Detaching a Volume
When an instance no longer needs access to a volume, you can detach the volume from the
instance without affecting the volume's data.
If you're told you don't have permission or are unauthorized, confirm with your administrator
the type of access you've been granted and which compartment you should work in.
For administrators: The policy in Let Users Launch Instances includes the ability to
attach/detach existing block volumes. The policy in Let Volume Admins Manage Block
Volumes and Backups lets the specified group do everything with block volumes and backups,
but not launch instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
l DetachVolume
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
For administrators: The policy in Let Users Launch Instances includes the ability to
attach/detach existing block volumes. The policy in Let Volume Admins Manage Block
Volumes and Backups lets the specified group do everything with block volumes and backups,
but not launch instances.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
You can now attach the boot volume to another instance. For more information, see Attaching a
Volume.
l DetachBootVolume
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
Deleting a Volume
You can delete a volume that is no longer needed.
For administrators: The policy in Let Volume Admins Manage Block Volumes and Backups lets
the specified group do everything with block volumes and backups.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
l DeleteVolume
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
For administrators: The policy in Let Volume Admins Manage Block Volumes and Backups lets
the specified group do everything with block volumes and backups.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
The Oracle Cloud Infrastructure Block Volume service lets you dynamically provision and
manage block storage volumes. You can create, attach, connect, and move volumes as needed
to meet your storage and application requirements. The Block Volume service uses NVMe-
based storage infrastructure and is designed for consistency. You simply provision the
capacity you need, and performance scales linearly per GB of volume size up to the service
maximums. The following table describes the performance characteristics of the service.
Metric Characteristic
These tests used a wide range of volume sizes and the most common read and write patterns
and were generated with the Gartner Cloud Harmony test suite. To show the throughput
performance limits, block sizes of 256 KB or larger should be used. For most environments,
4 KB, 8 KB, or 16 KB blocks are common depending on the application workload, and these
sizes are used specifically for IOPS measurements.
In the observed performance images in this section, the X axis represents the block size
tested, ranging from 4 KB to 1 MB. The Y axis represents the IOPS delivered. The Z axis
represents the read/write mix tested, ranging from 100% read to 100% write.
1 TB Block Volume
A 1 TB volume was mounted to a bare metal instance running in the Phoenix region. The
instance shape was a Dense I/O shape, and the workload was direct I/O with a 10 GB working
set. The following command was run for the Gartner Cloud Harmony test suite:
~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb --test
iops --skip_blocksize 512b
The results showed that for the 1 TB volume, the bandwidth limit for the larger block size
tests occurs at 320 MB/s.
50 GB Block Volume
A 50 GB volume was mounted to a bare metal instance running in the Phoenix region. The
instance shape was a Dense I/O shape, and the workload was direct I/O with a 10 GB working
set. The following command was run for the Gartner Cloud Harmony test suite:
~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target /dev/sdb --test
iops --skip_blocksize 512b
The results showed that for the 50 GB volume, the bandwidth limit is confirmed as 24,000 KB/s
for the larger block size tests (256 KB or larger block sizes), and the maximum of 3,000 IOPS
at a 4 KB block size is delivered. For small volumes, a 4 KB block size is common.
Twenty 1 TB Block Volumes
Twenty 1 TB volumes were mounted to a bare metal instance running in the Ashburn region.
The instance shape was a Dense I/O shape, and the workload was direct I/O with a 10 GB
working set. The following command was run for the Gartner Cloud Harmony test suite:
~/block-storage/run.sh --nopurge --noprecondition --fio_direct=1 --fio_size=10g --target
/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev
/sdm,/dev/sdn,/dev/sdo,/dev/sdp,/dev/sdq,/dev/sdr,/dev/sds,/dev/sdt,/dev/sdu --test iops --skip_
blocksize 512b
The results showed that for the host maximum test of twenty 1 TB volumes, the average is
2.1 GB/s and 400,000 IOPS to the host for the 50/50 read/write pattern.
Oracle Cloud Infrastructure offers both Bare Metal and Virtual Machine instances:
l Bare Metal - A bare metal compute instance gives you dedicated physical server
access for highest performance and strong isolation.
l Virtual Machine - A Virtual Machine (VM) is an independent computing environment
that runs on top of physical bare metal hardware. The virtualization makes it possible to
run multiple VMs that are isolated from each other. VMs are ideal for running
applications that do not require the performance and resources (CPU, memory, network
bandwidth, storage) of an entire physical machine.
Be sure to review Best Practices for Your Compute Instance for important information about
working with your Oracle Cloud Infrastructure Compute instance.
AVAILABILITY DOMAIN
The Oracle Cloud Infrastructure data center within your geographical region that hosts
cloud resources, including your instances. You can place instances in the same or different
availability domains, depending on your performance and redundancy requirements. For
more information, see Regions and Availability Domains.
TAGS
You can apply tags to your resources to help you organize them according to your
business needs. You can apply tags at the time you create a resource, or you can update
the resource later with the desired tags. For general information about applying tags, see
Resource Tags.
IMAGE
A template of a virtual hard drive that determines the operating system and other
software for an instance. For details about images, see Oracle-Provided Images. You can
also launch instances from custom images or bring your own image.
SHAPE
A template that determines the number of CPUs, amount of memory, and other resources
allocated to a newly created instance. You choose the most appropriate shape when you
launch an instance. The following tables list the available Bare Metal and VM shapes:
VM Shapes
VMs are an option that provides flexibility in compute power, memory capability, and
network resources for lighter applications. You can use Block Volume to add network-
attached block storage as needed.
X7 Shapes Availability
You can optionally attach volumes to an instance. For more information, see Overview of
Block Volume.
Resource Identifiers
Each Oracle Cloud Infrastructure resource has a unique, Oracle-assigned identifier called an
Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify
your resources, see Resource Identifiers.
To access the Console, you must use a supported browser. Oracle Cloud Infrastructure
supports the latest versions of Google Chrome, Microsoft Edge, Internet Explorer 11, Firefox,
and Firefox ESR. Note that private browsing mode is not supported for Firefox, Internet
Explorer, or Edge.
For general information about using the API, see About the API.
An administrator in your organization needs to set up groups, compartments, and policies that
control which users can access which services, which resources, and the type of access. For
example, the policies control who can create new users, create and manage the cloud
network, launch instances, create buckets, download objects, etc. For more information, see
Getting Started with Policies. For specific details about writing policies for each of the
different services, see Policy Reference.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud
Infrastructure resources that your company owns, contact your administrator to set up a user
ID for you. The administrator can confirm which compartment or compartments you should be
using.
l To attach a volume to an instance, both the instance and volume must be within the
same availability domain.
l Many Compute operations are subject to throttling.
Custom metadata keys (any key you define that is not ssh_authorized_keys or user_data)
have the following limits:
ssh_authorized_keys is a special key that does not have these limits, but its value is
validated to conform to a public key in the OpenSSH format.
user_data has a maximum size of 16KB. For Linux instances with cloud-init configured, you
can populate the user_data field with a Base64-encoded string of cloud-init user data. For
more information on formats that cloud-init accepts, see cloud-init formats. On Windows
instances, the user_data field can be provided but isn't used by Oracle-provided images.
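For example, a cloud-init script can be prepared for the user_data field like this (the script content is illustrative):

```shell
# Base64-encode an illustrative cloud-init script for the user_data metadata
# field. The encoded value must stay under the 16KB limit.
script='#!/bin/bash
echo "configured by cloud-init" > /tmp/marker'
user_data=$(printf '%s' "$script" | base64 -w0)
echo "$user_data"
```

The resulting string is what you supply as the user_data metadata value when launching the instance.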
You can provision compute capacity through an easy-to-use web console or an API. The bare
metal compute instance, once provisioned, provides you with access to the host. This gives
you complete control of your instance.
While you have full management authority for your instance, Oracle recommends a variety of
best practices to ensure system availability and top performance.
169.254.0.2, 169.254.2.2-169.254.2.254
For iSCSI connections to the boot and block volumes.
169.254.0.3
For uploads relating to kernel updates. See OS Kernel Updates for more information.
169.254.169.254
For DNS (port 53) and Metadata (port 80) services. See Getting Instance Metadata for more
information.
169.254.169.253
For Windows instances to activate with Microsoft Key Management Service (KMS).
All Oracle-provided images include rules that allow only "root" on Linux instances or
"Administrators" on Windows Server 2012 R2 instances to make outgoing connections to the
iSCSI network endpoints (169.254.0.2:3260, 169.254.2.0/24:3260) that serve the instance's
boot and block volumes.
l Oracle recommends that you do not reconfigure the firewall on your instance to remove
these rules. Removing these rules allows non-root users or non-administrators to
access the instance’s boot disk volume.
l Oracle recommends that you do not create custom images without these rules unless
you understand the security risks.
System Resilience
Oracle Cloud Infrastructure runs on Oracle's high-quality Sun servers. However, any hardware
can experience a failure. Follow industry-wide hardware failure best practices to ensure the
resilience of your solution. Some best practices include:
l Design your system with redundant compute nodes in different availability domains to
support fail-over capability.
l Create a custom image of your system drive each time you change the image.
l Back up your data drives, or sync to spare drives, regularly.
If you experience a hardware failure and have followed these practices, you can terminate the
failed instance, launch your custom image to create a new instance, and then apply the
backup data.
Stopping the DHCP client might remove the host route table when the lease expires. Also, loss
of network connectivity to your iSCSI connections might result in loss of the boot drive.
User Access
If you created your instance using an Oracle-provided Linux image, you can use SSH to access
your instance from a remote host as the opc user. After logging in, you can add users on your
instance.
If you do not want to share SSH keys, you can create additional SSH-enabled users.
If you created your instance using an Oracle-provided Windows image, you can access your
instance using a Remote Desktop client as the opc user. After logging in, you can add users on
your instance.
For more information about user access, see Adding Users on an Instance.
NTP Server
Oracle Cloud Infrastructure offers a fully managed, secure, and highly available NTP server
that you can use to set the date and time of your Compute and Database instances from within
your virtual cloud network (VCN). Oracle recommends that you configure your instances to
use the Oracle Cloud Infrastructure NTP server. For information about how to configure
instances to use this NTP server, see Configuring the Oracle Cloud Infrastructure NTP Server
for an Instance.
You can also choose to configure your instance to use a public NTP server or use FastConnect
to leverage an on-premises NTP server.
1. Configure IPtables to allow connections to the Oracle Cloud Infrastructure NTP server,
using the following commands:
sudo iptables -I BareMetalInstanceServices 8 -d 169.254.169.254/32 -p udp -m udp --dport 123 -m
comment --comment "Allow access to OCI local NTP service" -j ACCEPT
4. Configure the instance to use the Oracle Cloud Infrastructure NTP server for iburst. To
configure, modify the /etc/ntp.conf file as follows:
a. In the server section, comment out the lines specifying the RHEL servers:
#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst
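The step that points ntpd at the Oracle Cloud Infrastructure server is not shown above; presumably a server line such as the following is then added to /etc/ntp.conf (placement in the server section is an assumption):

```
server 169.254.169.254 iburst
```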
5. Set the NTP service to launch automatically when the instance boots with the following
command:
sudo chkconfig ntpd on
7. Confirm that the NTP service is configured correctly with the following command:
ntpq -p
3. Change the firewall rules to allow inbound and outbound traffic with the Oracle Cloud
Infrastructure NTP server, at 169.254.169.254, on UDP port 123 with the following
command:
awk -v n=13 -v s=' <passthrough ipv="ipv4">-A OUTPUT -d 169.254.169.254/32 -p udp -m udp --dport
123 -m comment --comment "Allow access to OCI local NTP service" -j ACCEPT </passthrough>' 'NR ==
n {print s} {print}' /etc/firewalld/direct.xml > tmp && mv tmp /etc/firewalld/direct.xml
At the prompt:
mv: overwrite ‘/etc/firewalld/direct.xml’?
enter y
4. Restart the firewall with the following command:
service firewalld restart
6. Configure the instance to use the Oracle Cloud Infrastructure NTP server for iburst. To
configure, modify the /etc/ntp.conf file as follows:
a. In the server section comment out the lines specifying the RHEL servers:
#server 0.rhel.pool.ntp.org iburst
#server 1.rhel.pool.ntp.org iburst
#server 2.rhel.pool.ntp.org iburst
#server 3.rhel.pool.ntp.org iburst
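Likewise, the step that points ntpd at the Oracle Cloud Infrastructure server is not shown here; presumably a server line such as the following is then added to /etc/ntp.conf (placement in the server section is an assumption):

```
server 169.254.169.254 iburst
```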
7. Start and enable the NTP service with the following commands:
systemctl start ntpd
systemctl enable ntpd
You also need to disable the chrony NTP client to ensure that the NTP service starts
automatically after a reboot, using the following commands:
systemctl stop chronyd
systemctl disable chronyd
8. Confirm that the NTP service is configured correctly with the following command:
ntpq -p
b. Click Type.
c. Change the value to NTP and click OK.
2. Configure the Windows Time service to enable the Timeserv_Announce_Yes and
Reliable_Timeserv_Announce_Auto flags.
To configure, set the AnnounceFlags parameter to 5:
a. From Registry Editor, navigate to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\
b. Click AnnounceFlags.
c. Change the value to 5 and click OK.
3. Enable the NTP server:
a. From Registry Editor, navigate to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer\
b. Click Enabled.
c. Change the value to 1 and click OK.
4. Set the time sources:
a. From Registry Editor, navigate to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\
b. Click NtpServer.
c. Change the value to 169.254.169.254,0x9 and click OK.
5. Set the poll interval:
a. From Registry Editor, navigate to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\
b. Click SpecialPollInterval.
c. Set the value to the interval that you want the time service to synchronize on. The
value is in seconds. To set it for 15 minutes, set the value to 900, and click OK.
6. Set the phase correction limit settings to restrict the time sample boundaries:
a. From Registry Editor, navigate to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config\
b. Click MaxPosPhaseCorrection.
c. Set the value to the maximum time offset in the future for time samples. The
value is in seconds. To set it for 30 minutes, set the value to 1800 and click OK.
d. Click MaxNegPhaseCorrection.
e. Set the value to the maximum time offset in the past for time samples. The value
is in seconds. To set it for 30 minutes, set the value to 1800 and click OK.
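The registry edits in the steps above can also be made from an elevated command prompt; the following reg add commands are a sketch that mirrors those steps (the values shown are the ones described above; verify each path before running):

```bat
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v Type /t REG_SZ /d NTP /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v AnnounceFlags /t REG_DWORD /d 5 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpServer /v Enabled /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v NtpServer /t REG_SZ /d "169.254.169.254,0x9" /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient /v SpecialPollInterval /t REG_DWORD /d 900 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPosPhaseCorrection /t REG_DWORD /d 1800 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 1800 /f
```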
7. Restart the time service by running the following command from a command prompt:
net stop w32time && net start w32time
8. Test the connection to the NTP server by running the following command from a
command prompt:
w32tm /query /peers
Peer: 169.254.169.254,0x9
State: Active
Time Remaining: 22.1901786s
Mode: 3 (Client)
Stratum: 0 (unspecified)
PeerPoll Interval: 10 (1024s)
HostPoll Interval: 10 (1024s)
After the time specified in the poll interval has elapsed, State will change from Pending
to Active.
big data, OLTP, and any other workload that can benefit from high-performance block storage.
Note that these devices are not protected in any way; they are individual devices locally
installed on your instance. Oracle Cloud Infrastructure does not take images, back up, or use
RAID or any other method to protect the data on NVMe devices. It is your responsibility to
protect and manage the durability of the data on these devices.
Oracle Cloud Infrastructure offers high-performance remote block (iSCSI) LUNs that are
redundant and can be backed up using an API call. See Overview of Block Volume for more
information.
A protected RAID array is the recommended way to protect against an NVMe device
failure. Three RAID levels cover the majority of workloads:
l RAID 1: An exact copy (or mirror) of a set of data on two or more disks; a classic
RAID 1 mirrored pair contains two disks, as shown:
l RAID 10: Stripes data across multiple mirrored pairs. As long as one disk in each
mirrored pair is functional, data can be retrieved, as shown:
l RAID 6: Block-level striping with two parity blocks distributed across all member disks,
as shown.
For more information about RAID and RAID levels, see RAID.
Because the appropriate RAID level is a function of the number of available drives, the
number of individual LUNs needed, the amount of space needed, and the performance
requirements, there isn't one correct choice. You must understand your workload and design
accordingly.
Create a single RAID 10 device across all four devices. This array provides redundancy by
surviving the failure of any single device and possibly surviving the failure of two devices,
depending on which devices fail. It would be exposed as a single device with about 6.4TB of
usable space.
Use the following commands to create a single RAID 10 device across four devices:
$ sudo yum install mdadm -y
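The installation command above appears to be the only surviving line of this example; the array-creation step itself is missing. A sketch of the likely command, assuming the four NVMe devices enumerate as /dev/nvme0n1 through /dev/nvme3n1 (confirm with lsblk before running):

```shell
# Create one RAID 10 array (striped mirrors) across all four NVMe devices.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```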
Create two physically independent RAID 1 arrays. These arrays are exposed as two different
LUNs to your applications. This is a recommended choice when you need to isolate one type of
I/O from another, such as data files and log files. These arrays can survive the failure of any
single device and might survive the failure of two devices, depending on which devices fail.
The usable space for these arrays is about 6.4TB.
Use the following commands to create two physically independent RAID 1 arrays:
$ sudo yum install mdadm -y
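As above, only the package installation survived; presumably two mirrored pairs are then created along these lines (device names are an assumption):

```shell
# Two independent RAID 1 mirrors, exposed as /dev/md0 and /dev/md1.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme2n1 /dev/nvme3n1
```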
Create a RAID 6 array across all four devices. This array has the same amount of space as the
options previously described (about 6.4TB), but its performance will be slower. In exchange
for the slight reduction in performance, you gain additional durability because the array can
survive the failure of any two devices.
Use the following commands to create a RAID 6 across all four devices:
$ sudo yum install mdadm -y
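The array-creation step is missing here as well; a sketch of the likely command, with device names assumed:

```shell
# One RAID 6 array (double parity) across all four NVMe devices.
sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```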
There are several options for BM.DenseIO1.36 instances with nine NVMe devices:
Create a single RAID 6 device across all nine devices. This array is redundant, performs well,
will survive the failure of any two devices, and will be exposed as a single LUN with about
23.8TB of usable space.
Use the following commands to create a single RAID 6 device across all nine devices:
$ sudo yum install mdadm -y
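Again, only the package installation survived; a sketch of the likely creation command, assuming the nine devices enumerate as /dev/nvme0n1 through /dev/nvme8n1:

```shell
# One RAID 6 array across all nine NVMe devices.
sudo mdadm --create /dev/md0 --level=6 --raid-devices=9 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
  /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1
```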
Create a four device RAID 10 and a five device RAID 6 array. These arrays would be exposed
as two different LUNs to your applications. This is a recommended choice when you need to
isolate one type of I/O from another, such as log and data files. In this example, your RAID 10
array would have about 6.4TB of usable space and the RAID 6 array would have about 9.6TB
of usable space.
Use the following commands to create a four-device RAID 10 and a five-device RAID 6 array:
$ sudo yum install mdadm -y
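The creation commands for the two arrays are missing; a sketch, with the device split and names assumed:

```shell
# RAID 10 across the first four devices, RAID 6 across the remaining five.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
sudo mdadm --create /dev/md1 --level=6 --raid-devices=5 \
  /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1
```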
If you need the best possible performance and can sacrifice some of your available space,
then an eight-device RAID 10 array is an option. Because RAID 10 requires an even number of
devices, the ninth device is left out of the array and serves as a hot spare in case another
device fails. This creates a single LUN with about 12.8 TB of usable space.
The following command adds /dev/nvme8n1 as a hot spare for the /dev/md0 array:
$ sudo mdadm /dev/md0 --add /dev/nvme8n1
For the best possible performance and I/O isolation across LUNs, create two four-device RAID
10 arrays. Because RAID 10 requires an even number of devices, the ninth device is left out of
the arrays and serves as a global hot spare in case another device in either array fails. This
creates two LUNS, each with about 6.4 TB of usable space.
Use the following commands to create two four-device RAID 10 arrays with a global hot
spare:
$ sudo yum install mdadm -y
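The creation commands for the two arrays are missing; a sketch, with device names assumed:

```shell
# Two four-device RAID 10 arrays; /dev/nvme8n1 is left out and is added
# later as a hot spare (see the following steps).
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
sudo mdadm --create /dev/md1 --level=10 --raid-devices=4 \
  /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1
```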
1. Add the spare to either array (it does not matter which one) by running this
command:
$ sudo mdadm /dev/md0 --add /dev/nvme8n1
2. Edit /etc/mdadm.conf to put both arrays in the same spare-group. Add spare-group=global
to the end of each line that starts with ARRAY, as follows:
$ sudo vi /etc/mdadm.conf
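The edited file would then look something like the following (the UUID values are placeholders; use the values that `mdadm --detail --scan` reports for your arrays):

```
DEVICE partitions
ARRAY /dev/md0 metadata=1.2 UUID=<uuid-of-md0> spare-group=global
ARRAY /dev/md1 metadata=1.2 UUID=<uuid-of-md1> spare-group=global
```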
It's important for you to be notified if a device in one of your arrays fails. mdadm has built-in
monitoring tools, and there are two options you can use:
l Set the MAILADDR option in /etc/mdadm.conf and then run the mdadm monitor as a
daemon
l Run an external script when mdadm detects a failure
SET THE MAILADDR OPTION IN /ETC/MDADM.CONF AND RUN THE MDADM MONITOR AS A DAEMON
The simplest method is to set the MAILADDR option in /etc/mdadm.conf, and then run the
mdadm monitor as a daemon, as follows:
1. The DEVICE partitions line is required for MAILADDR to work; if it is missing, you must
add it, as follows:
$ sudo vi /etc/mdadm.conf
DEVICE partitions
MAILADDR <[email protected]>
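Step 2 of this procedure is not shown above; presumably the monitor is then started as a daemon with a command along these lines (the delay value is illustrative):

```shell
# Scan /etc/mdadm.conf, watch all arrays, and check every 300 seconds.
sudo mdadm --monitor --scan --daemonise --delay=300
```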
3. To verify that the monitor runs at startup, run the following commands:
$ sudo chmod +x /etc/rc.d/rc.local
$ sudo vi /etc/rc.local
4. To verify that the email and monitor are both working, run the following command:
$ sudo mdadm --monitor --scan --test -1
Note that these emails will likely be marked as spam. The PROGRAM option, described
later in this topic, allows for more sophisticated alerting and messaging.
A more advanced option is to create an external script that runs if the mdadm monitor
detects a failure. You would integrate this type of script with your existing monitoring
system. For example:
#!/bin/bash
event=$1
device=$2
case "$event" in
Fail)
<"do something">
;;
FailSpare)
<"do something else">
;;
DegradedArray)
<"do something else else">
;;
TestMessage)
<"do something else else else">
;;
esac
Next, add the PROGRAM option to /etc/mdadm.conf, as shown in the following example:
1. The DEVICE partitions line is required for MAILADDR to work; if it is missing, you must
add it, as follows:
$ sudo vi /etc/mdadm.conf
DEVICE partitions
MAILADDR <[email protected]>
PROGRAM /etc/mdadm.events
3. To verify that the monitor runs at startup, run the following commands:
$ sudo chmod +x /etc/rc.d/rc.local
$ sudo vi /etc/rc.local
4. To verify that the email and monitor are both working, run the following command:
$ sudo mdadm --monitor --scan --test -1
Note that these emails will likely be marked as spam. The PROGRAM option, described
earlier in this topic, allows for more sophisticated alerting and messaging.
You can use mdadm to manually fail a device to check whether your RAID array can
survive the failure, as well as to test the alerts you have set up.
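Step 1 of this test is not shown above; with mdadm, a device failure can be simulated with a command like the following (the array and device names are illustrative):

```shell
# Mark /dev/nvme0n1 as faulty so mdadm treats it as a failed member.
sudo mdadm /dev/md0 --fail /dev/nvme0n1
```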
2. Recover the device; until you do, your array might not be protected. Use the following command:
$ sudo mdadm /dev/md0 --add /dev/nvme0n1
Your array will automatically rebuild in order to use the "new" device. Performance will
be decreased during this process.
3. You can monitor the rebuild status by running the following command:
$ sudo mdadm --detail /dev/md0
Compute resources in the cloud are designed to be temporary and fungible. If an NVMe device
fails while the instance is in service, you should start another instance with the same amount
of storage or more, and then copy the data onto the new instance, replacing the old instance.
There are multiple toolsets for copying large amounts of data, with rsync being the most
popular. Since the connectivity between instances is a full 10Gb/sec, copying data should be
quick. Remember that with a failed device, your array may no longer be protected, so you
should copy the data off of the impacted instance as quickly as possible.
The Linux Logical Volume Manager (LVM) provides a rich set of features for managing
volumes. If you need these features, we strongly recommend that you use mdadm as described
in preceding sections of this topic to create the RAID arrays, and then use LVM's pvcreate,
vgcreate, and lvcreate commands to create volumes on the mdadm LUNs. You should not use
LVM directly against your NVMe devices.
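As a sketch of that layering, assuming an mdadm array at /dev/md0 (the volume group and logical volume names, and the ext4 choice, are illustrative):

```shell
# Build LVM on top of the mdadm LUN, never on the raw NVMe devices.
sudo pvcreate /dev/md0
sudo vgcreate vg0 /dev/md0
sudo lvcreate --name lv0 --extents 100%FREE vg0
sudo mkfs.ext4 /dev/vg0/lv0
```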
Once your data is protected against the loss of an NVMe device, you need to protect it against
the loss of an instance or the loss of the availability domain. This type of protection is typically
done by replicating your data to another availability domain or backing up your data to
another location. The method you choose depends on your objectives. For details, see
Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
REPLICATION
Replicating your data from one instance in one availability domain to another has the lowest
RTO and RPO at a significantly higher cost than backups; for every instance in one availability
domain, you must have another instance in a different availability domain.
For Oracle database workloads, you should use the built-in Oracle Data Guard functionality to
replicate your databases. Oracle Cloud Infrastructure availability domains are each close
enough to each other to support high performance, synchronous replication. Asynchronous
replication is also an option.
For general-purpose block replication, DRBD is the recommended option. You can configure
DRBD to replicate, synchronously or asynchronously, every write in one availability domain to
another availability domain.
BACKUPS
Traditional backups are another way to protect data. All commercial backup products are fully
supported on Oracle Cloud Infrastructure. If you use backups, the RTO and RPO are
significantly higher than using replication because you must recreate the compute resources
that failed and then restore the most recent backup. Costs are significantly lower because you
don't need to maintain a second instance. Do not store your backups in the same availability
domain as their original instance.
The two recommended ways of protecting against data corruption or loss from application or
user error are regularly taking snapshots or creating backups.
SNAPSHOTS
The two easiest ways to maintain snapshots are to either use a file system that supports
snapshots, such as ZFS, or use LVM to create and manage the snapshots. Because of the way
LVM has implemented copy-on-write (COW), performance may significantly decrease when a
snapshot is taken using LVM.
BACKUPS
All commercial backup products are fully supported on Oracle Cloud Infrastructure. Make sure
that your backups are not stored in the same availability domain as their original instance.
Oracle-Provided Images
An image is a template of a virtual hard drive. The image determines the operating system
and other software for an instance. The following table lists the images that are available in
Oracle Cloud Infrastructure. For specific image and kernel version details, along with changes
between versions, see Oracle-Provided Image Release Notes.
You also can create custom images of your boot disk OS and software configuration for
launching new instances.
All Oracle-provided images include rules that allow only "root" on Linux instances or
"Administrators" on Windows Server 2012 R2 instances to make outgoing connections to the
iSCSI network endpoints (169.254.0.2:3260, 169.254.2.0/24:3260) that serve the instance's
boot and block volumes.
l Oracle recommends that you do not reconfigure the firewall on your instance to remove
these rules. Removing these rules allows non-root users or non-administrators to
access the instance’s boot disk volume.
l Oracle recommends that you do not create custom images without these rules unless
you understand the security risks.
The Ubuntu image is preconfigured with suitable repositories to allow you to install, update,
and remove packages.
Oracle Linux images on Oracle Cloud Infrastructure include Oracle Linux Premier Support at
no extra cost. This gives you all the services included with Premier Support including Oracle
Ksplice. Ksplice enables you to apply important security and other critical kernel updates
without a reboot. For more information, see About Oracle Ksplice and Ksplice Overview.
Ksplice is available only for Linux instances launched on or after February 15, 2017. It is
installed by default on instances launched on or after August 25, 2017. For instances launched
prior to August 25, 2017, you must install it before running it. See
Installing and Running Oracle Ksplice for more information.
Ksplice Support
Users
For instances created using Oracle Linux and CentOS images, the user name opc is created
automatically. The opc user has sudo privileges and is configured for remote access over the
SSH v2 protocol using RSA keys. The SSH public keys that you specify while creating
instances are added to the /home/opc/.ssh/authorized_keys file.
For instances created using the Ubuntu image, the user name ubuntu is created
automatically. The ubuntu user has sudo privileges and is configured for remote access over
the SSH v2 protocol using RSA keys. The SSH public keys that you specify while creating
instances are added to the /home/ubuntu/.ssh/authorized_keys file.
Remote Access
Access to the instance is permitted only over the SSH v2 protocol. All other remote access
services are disabled.
Firewall Rules
Instances created using Oracle-provided images have a default set of firewall rules which
allow only SSH access. Instance owners can modify those rules as needed, but must not
restrict link local traffic to address 169.254.0.2 in accordance with the warning at the top of
this page.
Be aware that Networking uses security lists to control packet-level traffic in and out of the
instance. When troubleshooting access to an instance, make sure both the security lists
associated with the instance's subnet and the instance's firewall rules are set correctly.
Cloud-init Compatibility
Instances created using Oracle-provided images are compatible with Cloud-init. When
launching an instance with the Core Services API, you can pass Cloud-init directives with the
metadata parameter. For more information, see LaunchInstance.
Users
For instances created using Oracle-provided Windows images, the user name opc is created
automatically. When you launch an instance using the Windows image, Oracle Cloud
Infrastructure will generate an initial, one-time password that you can retrieve using the
console or API. This password must be changed after you initially log on.
Remote Access
Firewall Rules
Instances created using the Windows image have a default set of firewall rules which allow
Remote Desktop protocol or RDP access on port 3389. Instance owners can modify these rules
as needed, but must not restrict link local traffic to 169.254.169.253 for the instance to
activate with Microsoft Key Management Service (KMS). This is how the instance stays active
and licensed.
Be aware that Networking uses security lists to control packet-level traffic in and out of the
instance. When troubleshooting access to an instance, make sure both the security lists
associated with the instance's subnet and the instance's firewall rules are set correctly.
Oracle Linux
Oracle 6.x
ORACLE-LINUX-6.9-2018.01.20-0
Notes:
l Includes a fix for an issue where virtual machines (VMs) may have network interface
cards (NICs) with the MTU set to 1500 instead of 9000.
ORACLE-LINUX-6.9-2018.01.11-0
Notes:
ORACLE-LINUX-6.9-2017.12.18-0
Notes:
ORACLE-LINUX-6.9-2017.11.15-0
Notes:
ORACLE-LINUX-6.9-2017.10.25-0
Notes:
ORACLE-LINUX-6.9-2017.09.29-0
Notes:
l Includes a fix to address an issue where an updated hostname was not persisted across
an instance reboot.
l NetworkManager is disabled by default.
l Configured to use the OCI NTP server by default. See Configuring the Oracle Cloud
Infrastructure NTP Server for an Instance.
l Preview, developer, and developer_epel yum channels are enabled by default.
ORACLE-LINUX-6.9-2017.08.25-0
Notes:
l Includes a fix to address an issue where 'hostname -f' was not working with hostname
resolution.
ORACLE-LINUX-6.9-2017.07.17-0
Notes:
ORACLE-LINUX-6.9-2017.06.14-0
Notes:
ORACLE-LINUX-6.9-2017.05.25-0
Notes:
ORACLE-LINUX-6.8-2017.03.02-0
ORACLE-LINUX-6.8-2017.01.09-0
Oracle 7.x
ORACLE-LINUX-7.4-GEN2-GPU-2018.01.20-0
ORACLE-LINUX-7.4-2018.01.20-0
Notes:
l Includes a fix for an issue where virtual machines (VMs) may have network interface
cards (NICs) with the MTU set to 1500 instead of 9000.
ORACLE-LINUX-7.4-GEN2-GPU-2018.01.10-0
Notes:
l Includes a fix to address a GRUB boot order issue that could cause the image to boot
by default into an older version of the kernel after updates.
l This image supports X7 hosts.
ORACLE-LINUX-7.4-2018.01.10-0
Notes:
ORACLE-LINUX-7.4-GEN2-GPU-2017.12.18-0
Notes:
ORACLE-LINUX-7.4-2017.12.18-0
Notes:
ORACLE-LINUX-7.4-GEN2-GPU-2017.11.15-0
Notes:
ORACLE-LINUX-7.4-2017.11.15-0
Notes:
ORACLE-LINUX-7.4-GEN2-GPU-2017.10.25-0
Notes:
ORACLE-LINUX-7.4-2017.10.25-0
Notes:
ORACLE-LINUX-7.4-2017.09.29-0
Notes:
l Includes a fix to address an issue where an updated hostname was not persisted across
an instance reboot.
l NetworkManager is disabled by default.
l Configured to use the OCI NTP server by default. See Configuring the Oracle Cloud
Infrastructure NTP Server for an Instance.
l Preview, developer, and developer_epel yum channels are enabled by default.
ORACLE-LINUX-7.4-2017.08.25-1
Notes:
ORACLE-LINUX-7.4-2017.08.25-0
Notes:
ORACLE-LINUX-7.3-2017.07.17-1
Notes:
l Consistent image versions between the Phoenix (PHX) and Ashburn (IAD) regions.
ORACLE-LINUX-7.3-2017.07.17-0
Kernel Version:
l PHX: 4.1.12-94.3.6.el7uek.x86_64
l IAD: 4.1.12-94.3.8.el7uek.x86_64
PHX Notes:
l Includes a fix to address an issue where 'hostname -f' was not working with hostname
resolution.
IAD Notes:
ORACLE-LINUX-7.3-2017.05.23-0
Notes:
ORACLE-LINUX-7.3-2017.04.18-0
Notes:
ORACLE-LINUX-7.3-2017.03.03-0
CentOS
CentOS 6.x
CENTOS-6.9-2018.01.05-0
Notes:
CENTOS-6.9-2017.09.14-0
Notes:
l Includes a fix to address an issue where 'hostname -f' was not working with hostname
resolution.
CENTOS-6.9-2017.07.17-0
Notes:
CENTOS-6.9-2017.06.13-0
Notes:
CENTOS-6.8-2017.02.03-0
CentOS 7.x
CENTOS-7-2018.01.04-0
Notes:
CENTOS-7-2017.10.19-0
Notes:
CENTOS-7-2017.09.14-0
Notes:
l Includes a fix to address an issue where 'hostname -f' was not working with hostname
resolution.
CENTOS-7-2017.07.17-0
Notes:
CENTOS-7-2017.04.18-0
Notes:
CENTOS-7-2017.02.02-2
Ubuntu
Ubuntu 14.04
CANONICAL-UBUNTU-14.04-2018.01.11-0
Notes:
CANONICAL-UBUNTU-14.04-2017.11.21-0
Notes:
CANONICAL-UBUNTU-14.04-2017.08.22-0
CANONICAL-UBUNTU-14.04-2017.07.10-0
Ubuntu 16.04
CANONICAL-UBUNTU-16.04-GEN2-GPU-2018.01.11-0
Notes:
CANONICAL-UBUNTU-16.04-2018.01.11-0
Notes:
CANONICAL-UBUNTU-16.04-GEN2-GPU-2017.11.21-0
Notes:
CANONICAL-UBUNTU-16.04-2017.11.21-0
Notes:
CANONICAL-UBUNTU-GEN2-GPU-16.04-2017.10.25-0
Notes:
CANONICAL-UBUNTU-16.04-2017.10.25-0
Notes:
CANONICAL-UBUNTU-16.04-2017.08.22-0
CANONICAL-UBUNTU-16.04-2017.06.28-0
CANONICAL-UBUNTU-16.04-2017.05.16-0
Windows
This topic describes how to install and configure Ksplice. Ksplice is only available for Oracle
Linux instances launched on or after February 15, 2017. It is installed on instances launched
on or after August 25, 2017, so you just need to run it on these instances to install the
available Ksplice patches. For instances launched prior to August 25, 2017, you must install it
prior to running it.
<public-ip-address> is your instance IP address that you retrieved from the Console,
see Getting the Instance Public IP Address.
2. Run the following command to switch to the root user:
sudo bash
3. Download the Ksplice installation script with the following SSH command:
wget -N https://www.ksplice.com/uptrack/install-uptrack-oc
4. Once the script is downloaded, use the following SSH command to install Ksplice:
sh install-uptrack-oc
Running Ksplice
To run Ksplice you need to connect to your Linux instance by using a Secure Shell (SSH). See
Connecting to an Instance for more information.
<public-ip-address> is your instance IP address that you retrieved from the Console,
see Getting the Instance Public IP Address.
2. Run the following commands to switch to the root user:
sudo bash
cd
Automatic Updates
You can configure automatic updates by setting the value of autoinstall to yes in
/etc/uptrack/uptrack.conf.
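For example, the relevant line in /etc/uptrack/uptrack.conf would read something like:

```
autoinstall = yes
```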
You can create a custom image of a Bare Metal instance's boot disk and use it to launch other
instances. Instances you launch from your image include the customizations, configuration,
and software installed when you created the image.
Custom images do not include the data from any attached block volumes. For information
about backing up volumes, see Backing Up a Volume.
For administrators: The policy in Let Users Launch Instances includes the ability to create and
manage images. If the specified group doesn't need to launch instances or attach volumes,
you could simplify that policy to include only manage instance-family, and remove the
statements involving volume-family and virtual-network-family.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
169.254.0.2, 169.254.2.2-169.254.2.254
For iSCSI connections to the boot and block volumes.
169.254.0.3
For uploads relating to kernel updates. See OS Kernel Updates for more information.
169.254.169.254
For DNS (port 53) and Metadata (port 80) services. See Getting Instance Metadata for
more information.
169.254.169.253
For Windows instances to activate with Microsoft Key Management Service (KMS).
l When you create an image of a running instance, the instance shuts down and remains
unavailable for several minutes. When the process completes, the instance restarts.
l You cannot create a new custom image if that instance is already engaged in the image
creation process. When you start to create a custom image, the system implements a
20-minute timeout, during which you cannot create another image of the same instance.
You can, however, create images of multiple instances at the same time.
l Custom images are available to all users authorized for the compartment in which the
image was created.
l A custom image cannot exceed 50 GB.
l You can create a maximum of 25 custom images per region per root compartment.
l You cannot create an image of an Oracle Database instance.
l Custom images cannot be exported or downloaded.
l If you use a custom image and update the OS kernel on your instance, you must also
upload the update to the network drive. See OS Kernel Updates for more information.
See Bring Your Own Image for information on how to deploy any version of any operating
system that is supported by the Oracle Cloud Infrastructure hardware.
If you do attempt to use an existing X5 image on X7 hardware, note that Ubuntu 14.04 and all
Windows and CentOS versions are not cross-compatible.
Oracle Linux and Ubuntu 16.04 are cross-compatible; however, you need to update the kernel
to the most recent version to install the latest device drivers. To do so, run the following
commands from a terminal session:
l Oracle Linux
yum update
l Ubuntu 16.04
apt-get update
apt-get dist-upgrade
The primary device drivers that are different between X5 and X7 hosts are:
Additional updates may be required depending on how you have customized the image.
Limits
To review existing custom images using the Console, click Compute, choose your
Compartment, and then click Custom Images.
l DeleteImage
l GetImage
l ListImages
l CreateImage
l UpdateImage
Windows supports two kinds of images: generalized and specialized. Generalized images are
images that have been cleaned of instance-specific information. Specialized images are point-
in-time snapshots of the boot disk of a running instance, and are useful for creating backups
of an instance. Oracle Cloud Infrastructure supports bare metal and VM instances launched
from both generalized and specialized custom Windows images.
Generalized images
Specialized images
A specialized image has an OS disk that is already fully installed, and is essentially a copy of
the original bare metal or VM instance. Specialized images are intended to be used for
backups so that you can recover from a failure. Specialized images are useful when you are
testing a task and may need to roll back to a known good configuration. Specialized images are
not recommended for cloning multiple identical Bare Metal instances or VMs in the same
network because of issues with multiple computers having the same computer name and ID.
When creating a specialized image, you must remember the opc user's password; a new
password is not generated when the instance launches, and it cannot be retrieved from the
console or API.
6. Extract the files to C:\Windows\Panther. The following files are extracted into the
Panther folder for both Windows Server 2008 and Windows Server 2012:
l Generalize.cmd
l Specialize.cmd
l unattend.xml
l Post-Generalize.ps1
For Windows Server 2008, the following file is also extracted into the Panther folder:
l Windows2008-SnapshotUtilities.ps1
7. Optional: If you want to preserve the opc user account, edit C:\Program
Files\oci\imageType.json and change the imageType setting to custom. A new
password is not generated, and the existing password cannot be retrieved from the
console or API.
If you want to configure the generalized image to re-create the opc user account when a
new instance is launched from the image, leave the imageType setting at the default
value of general. The new account's password can be retrieved through the API using
GetWindowsInstanceInitialCredentials.
8. Right-click Generalize.cmd, and then click Run as administrator. Keep in mind the
following outcomes of running this command:
l Your connection to the Remote Desktop client may immediately be turned off and
you will be logged out of the instance. If this does not occur, you should log out of
the instance yourself.
l Because sysprep generalize turns off Remote Desktop, you won't be able to log
in to the instance again.
l Creating a generalized image essentially destroys the instance's functionality.
You should wait for a few minutes before proceeding to step 9 to ensure the
generalization process has completed.
9. Create the new image by following the steps in To create a custom image.
10. After you create an image from an instance that has been generalized, Oracle
recommends that you terminate that instance. Although it may appear to be running, it
won't be fully operable.
You create a specialized image the same way you create other custom images. For step-by-
step instructions, see Managing Custom Images.
Image Import/Export
Oracle Cloud Infrastructure Compute enables you to share custom images across tenancies
and regions using image import/export. You can only import custom images that have been
exported from Oracle Cloud Infrastructure, so you first need to export an image before you
can import it.
The following Oracle Cloud Infrastructure operating systems support image import/export:
You can also use image import/export to share custom images from Bring Your Own Image
scenarios across tenancies and regions, so you don't need to re-create the image manually in
each region. You go through the steps required to manually create the image in one region;
once this is done, you can export the image, making it available for import in additional
tenancies and regions. The exported image format is QCOW2.
Windows Images
For example:
https://objectstorage.us-phoenix-1.oraclecloud.com/n/MyNamespace/b/MyBucket/o/MyCustomImage.qcow2
Pre-Authenticated Requests
When using import/export across tenancies and regions, you need to use Pre-Authenticated
Requests. See Using the Console for instructions on creating a pre-authenticated request.
When you go through these instructions, after you click Create Pre-Authenticated
Request, the Pre-Authenticated Request Details dialog opens. You need to make a copy
of the pre-authenticated request URL displayed here, as this is the only time this will be
displayed. This is the Object Storage URL you specify for import/export.
Exporting an Image
You can use the console or API to export images, and the exported images are stored in the
Oracle Cloud Infrastructure Object Storage service. To perform an image export, you need
write access to the Object Storage bucket for the image. For more information, see Overview
of Object Storage and Let Users Write Objects to Object Storage Buckets.
Once you click Export Image, the image status changes to EXPORTING. You can still launch
instances while the image is exporting, but you can't delete the image until the export has
finished. Once the export is complete, the image status changes to AVAILABLE. If the image
status changes to AVAILABLE, but you don't see the exported image in the Object Storage
location you specified, the export failed, and you will need to go through the steps again to
export the image.
Importing an Image
You can use the console or API to import exported images from Object Storage. To import an
image, you need read access to the Object Storage object containing the image. For more
information, see Let Users Download Objects from Object Storage Buckets.
Once you click Import Exported Image, you'll see the imported image in the Custom
Images list for the compartment, with a status of IMPORTING. Once the import completes
successfully, the status will change to AVAILABLE. If the status does not change, or no entry
appears in the Custom Images list, the import failed. If the import failed, make sure you
have read access to the Object Storage object, and that the object contains a supported
image.
Licensing Requirements
l Oracle-provided images: Oracle provides several pre-built images for Oracle Linux,
Microsoft Windows, Ubuntu and CentOS. For the complete list of images, see Oracle-
Provided Images.
l RHEL 7.4 images: You can build new RHEL 7.4 images for bare metal and VM
instances using a Terraform template. For more information, see Terraform Provider for RHEL 7.4.
l Importing custom images for emulation mode: You can import existing operating
system images using either the VMDK or QCOW2 formats, to run in emulation mode
VMs. For more information, see Bring Your Own Custom Image for Emulation Mode
Virtual Machines.
l Bring Your Own KVM: You can bring your own operating system images, or older
operating systems such as Ubuntu 6.x, RHEL 3.x, and CentOS 5.4, using KVM on bare metal
instances. For more information, see Installing and Configuring KVM on Bare Metal
Instances with Multi-VNIC.
l Bring Your Own OVM: You can bring your Oracle VM workload to Oracle Cloud
Infrastructure. For more information, see Installing and Configuring Oracle VM on
Oracle Cloud Infrastructure.
Bring Your Own Custom Image for Emulation Mode Virtual Machines
The Oracle Cloud Infrastructure Compute service enables you to import your older operating
systems to Oracle Cloud Infrastructure. You can import a wide range of new and legacy
production operating systems, using the QCOW2 or VMDK formats, and then run them on
Compute virtual machines (VMs) using emulated hardware. The following table lists the
operating systems that are supported for emulation mode VMs:
Custom images imported for emulation mode VMs must meet the following requirements:
We recommend that you enable certificate-based SSH; however, this is optional. If you want
your image to automatically use ssh keys supplied from the User Data field when you launch
an instance, you can install Cloud-Init when preparing the image. See Launching an Instance
for more information about the User Data field.
The following is a high-level outline of the steps required to import custom images for
emulation mode VMs.
1. Prepare the image for import. This includes enabling serial console access for all
custom images and configuring a network interface without a static MAC address and to
support DHCP. For more information, see Preparing the Custom Image for Emulation
Mode.
2. Export the image as VMDK or QCOW2 format using existing virtualization tools. See the
tools documentation for your virtualization environment.
3. Upload the image to Oracle Cloud Infrastructure Object Storage. See Overview of
Object Storage for more information.
4. Import the image. See Importing Custom Images for Emulation Mode.
Oracle Cloud Infrastructure enables you to import a custom image and then use the custom
image to launch virtual machine (VM) instances in emulation mode. Before you can import the
custom image, you need to prepare the custom image to ensure that instances launched from
the custom image can boot correctly and that network connections will work. This topic
describes the steps to prepare the custom image.
The most important step in preparing your custom image for import is to configure it to support
connections using the serial console feature in the Compute service. For more information
about this feature, and for troubleshooting steps if your image has network connectivity
issues after launch, see Instance Console Connections.
The serial console connection in Oracle Cloud Infrastructure uses the first serial port, ttyS0,
on the VM. The boot loader and the operating system should be configured to use ttyS0 as a
console terminal for both input and output.
The steps to configure the boot loader to use ttyS0 as a console terminal for both input and
output depend on the GRUB version. Run the following command on the operating system to
determine the GRUB version:
grub-install --version
If the version number returned is 2.*, use the steps for GRUB 2. For earlier versions, use the
steps for GRUB.
To configure GRUB
1. Run the following command to modify the GRUB configuration file:
sudo vi /boot/grub/grub.conf
To configure GRUB 2
1. Run the following command to modify the GRUB configuration file:
sudo vi /etc/default/grub
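The specific settings to add inside this file are not shown in this excerpt. As a sketch, assuming a 9600-baud console on the first serial port, a typical GRUB 2 serial-console configuration in /etc/default/grub looks like the following (the values are illustrative, not taken from this document):

```
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=9600"
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,9600"
```

After editing, regenerate the GRUB configuration so the change takes effect (on Oracle Linux 7, for example, sudo grub2-mkconfig -o /boot/grub2/grub.cfg).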
The operating system may already be configured to use ttyS0 as a console terminal for both
input and output. To verify, run the following command:
sudo vi /etc/securetty
Check the file for ttyS0. If you don't see it, append ttyS0 to the end of the file.
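This check-and-append can be scripted. The following sketch works on a copy of the file so it can be run safely; on the image itself, set SECURETTY to /etc/securetty and run as root:

```shell
# Work on a copy of the securetty file for this demonstration;
# on a real image, set SECURETTY=/etc/securetty and run as root.
SECURETTY=$(mktemp)
printf 'console\ntty1\n' > "$SECURETTY"
# Append ttyS0 only if an exact-match line is not already present.
grep -qx 'ttyS0' "$SECURETTY" || echo 'ttyS0' >> "$SECURETTY"
grep -x 'ttyS0' "$SECURETTY"   # prints: ttyS0
```

Because the append is guarded by the grep check, the snippet is idempotent: running it again does not add a duplicate entry.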
After completing the steps to enable serial console access to the image, you should validate
that serial console access is working by testing the image with serial console in your
virtualization environment. Consult the documentation for your virtualization environment on
how to do this. Verify that the boot output is showing up in the serial console output and that
there is interactive input available once the image has booted.
If no output is displayed on the serial console, verify in the configuration for your
virtualization environment that the serial console device is attached to the first serial port.
If the serial console is displaying output, but there is no interactive input available, check that
there is a terminal process listening on the ttyS0 port. To do this, run the following command:
ps aux | grep ttyS0
This command should output a terminal process that is listening on the ttyS0 port. For
example, if your system is using getty, you will see the following output:
/sbin/getty ttyS0
If you don't see this output, a login process is likely not configured for the serial
console connection. To resolve this, update the init settings so that a terminal process
listens on ttyS0 at startup.
For example, if your system uses getty, add the following command to the init settings
to run at system startup:
getty -L 9600 ttyS0 vt102
The steps to do this will vary depending on the operating system, so consult the
documentation for the image's operating system.
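For instance, on an older SysV-init system, the corresponding entry in /etc/inittab would look like the following sketch (the runlevels and terminal type are typical values, not taken from this document):

```
S0:2345:respawn:/sbin/getty -L 9600 ttyS0 vt102
```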
NETWORK CONFIGURATION
Once a custom image is imported into Oracle Cloud Infrastructure, all existing NICs are
replaced with a single NIC. You need to configure the NIC to access the internet: ensure
that DHCP is enabled on the NIC and that the MAC and IP addresses are not hardcoded.
You can configure the NIC during the image preparation process or after the
image has been imported to Oracle Cloud Infrastructure using serial console access to the
instance. Consult your system documentation on how to perform network configuration for
your system, and see Instance Console Connections.
Once the instance has been launched, you can attach and configure additional VNICs. See
Virtual Network Interface Cards (VNICs) for more information.
Oracle Cloud Infrastructure enables you to import a custom image and then use the custom
image to launch virtual machine (VM) instances in emulation mode. For more information
about supported images and image requirements, see Bring Your Own Custom Image for
Emulation Mode Virtual Machines. This topic walks through the custom image import process.
You need to prepare the image prior to importing, see Preparing the Custom Image for
Emulation Mode for more information.
Once you click Import Image, you'll see the imported image in the Custom Images list for
the compartment, with a status of IMPORTING. Once the import completes successfully, the
status will change to AVAILABLE. If the status does not change, or no entry appears in the
Custom Images list, the import failed. If the import failed, make sure you have read access
to the Object Storage object, and that the object contains a supported image.
NEXT STEPS
1. Launch an instance based on the custom image. See Launching an Instance for more
information. To select a custom image, choose CUSTOM IMAGE as the Image
Source. You can then select the imported custom image from the Images list.
2. Create a serial console connection to the instance. See Instance Console Connections
for more information.
See Compute Known Issues for current issues and workarounds for imported custom images.
OS Kernel Updates
Note
Oracle Cloud Infrastructure boots each instance from a network drive. This configuration
requires additional actions when you update the OS kernel.
Oracle Cloud Infrastructure uses Unified Extensible Firmware Interface (UEFI) firmware and a
Preboot eXecution Environment (PXE) interface on the host server to load iPXE from a Trivial
File Transfer Protocol (TFTP) server. The iPXE implementation runs a script to boot Oracle
Linux. During the boot process, the system downloads the kernel, the initrd file, and the
kernel boot parameters from the network. The instance does not use the host's GRUB boot
loader.
Normally, the yum update kernel-uek command edits the GRUB configuration file, either
grub.cfg or grub.conf, to configure the next boot. Since bare metal instances do not use the
GRUB boot loader, changes to the GRUB configuration file are not implemented. When you
update the kernel on your instance, you also must upload the update to the network to ensure
a successful boot process. The following approaches address this need:
l Instances launched from an Oracle-provided image include an Oracle yum plug-in that
seamlessly handles the upload when you run the yum update kernel-uek command.
l If you use a custom image based on an Oracle-provided image, the included yum plug-
in will continue to work, barring extraordinary changes.
l If you install your own package manager, you must either write your own plug-in or
upload the kernel, initrd, and kernel boot parameters manually.
/usr/share/yum-plugins/kernel-update-handler.py
/etc/yum/pluginconf.d
The plug-in looks for two variables in the /etc/sysconfig/kernel file, UPDATEDEFAULT
and DEFAULTKERNEL. It picks up the updates only when UPDATEDEFAULT is set to "yes"
and the DEFAULTKERNEL value matches the kernel being updated. For example:
# UPDATEDEFAULT specifies if new-kernel-pkg should make
# new kernels the default
UPDATEDEFAULT=yes
# DEFAULTKERNEL specifies the default kernel package type
DEFAULTKERNEL=kernel-uek
Oracle-provided images incorporate the Unbreakable Enterprise Kernel (UEK). If you want to
switch to a non-UEK kernel, you must update the DEFAULTKERNEL value to "kernel" before
you run yum update kernel.
Manual Updates
Oracle recommends using the Oracle yum plug-in to update
the kernel.
If you manually upload the updates, there are four relevant URLs:
http://169.254.0.3/kernel
http://169.254.0.3/initrd
http://169.254.0.3/cmdline
http://169.254.0.3/activate
The first three URLs are for uploading files (HTTP request type PUT). The fourth URL is for
activating the uploaded files (HTTP request type POST). The system discards the uploaded
files if they are not activated before the host restarts.
The kernel and initrd are simple file uploads. The cmdline upload must contain the kernel boot
parameters found in the grub.cfg or grub.conf file, depending on the Linux version. The
following example is an entry from the /boot/efi/EFI/redhat/grub.cfg file in Red Hat
Linux 7. The parameters to upload are everything after the kernel image path, beginning
with ro.
kernel /boot/vmlinuz-4.1.12-37.5.1.el6uek.x86_64 ro root=UUID=8079e287-53d7-4b3d-b708-c519cf6829c8 rd_
NO_LUKS KEYBOARDTYPE=pc KEYTABLE=us netroot=iscsi:@169.254.0.2::3260:iface1:eth0::iqn.2015-
02.oracle.boot:uefi rd_NO_MD SYSFONT=latarcyrheb-sun16 ifname=eth0:90:e2:ba:a2:e3:80 crashkernel=auto
iscsi_initiator=iqn.2015-02. rd_NO_LVM ip=eth0:dhcp rd_NO_DM LANG=en_US.UTF-8 console=tty0
console=ttyS0,9600 iommu=on
The following command displays the contents to be uploaded to the cmdline file.
cat /tmp/cmdline
The following commands update the cmdline and initrd files, and then activate the changes.
CKSUM=`md5sum /tmp/cmdline | cut -d ' ' -f 1`
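The remainder of the command sequence is cut off in this excerpt. The following is a hedged sketch of the flow; the boot parameters are placeholders, and the curl upload and activation commands are assumptions based on the URL descriptions above, not verbatim from the original:

```shell
# Build a stand-in cmdline file and compute its MD5 checksum, as in the
# snippet above (the boot parameters shown here are placeholders).
printf 'ro root=UUID=example console=tty0 console=ttyS0,9600' > /tmp/cmdline
CKSUM=$(md5sum /tmp/cmdline | cut -d ' ' -f 1)
echo "$CKSUM"        # 32-character hex digest
# The uploads themselves (not run here) PUT each file to the endpoints
# listed above, then POST to activate -- for example:
#   curl -X PUT --data-binary @/tmp/cmdline http://169.254.0.3/cmdline
#   curl -X POST http://169.254.0.3/activate
```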
consists of a private key and public key. You keep the private key on your computer and
provide the public key every time you launch an instance.
When you connect to an instance using SSH, you provide the path to the key pair file in the
SSH command. You can have as many key pairs as you want, or you can keep it simple and
use one key pair for all or several of your instances.
To create key pairs, you can use a third-party tool such as OpenSSH on UNIX-style systems
(including Linux, Solaris, BSD, and OS X) or PuTTY Key Generator on Windows.
Prerequisites
If you're using a UNIX-style system, you probably already have the ssh-keygen utility
installed. To determine if it's installed, type ssh-keygen on the command line. If it's not
installed, you can download OpenSSH for UNIX from http://www.openssh.com/portable.html
and install it.
If you're using a Windows operating system, you need PuTTY and the PuTTY Key
Generator. Download PuTTY and PuTTYgen from http://www.putty.org and install them.
Argument Description
-b 2048 Generate a 2048-bit key. You can omit this argument, because 2048 is the
default.
-f <path/root_name> The location where the key pair will be saved and the root
name for the files.
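Combining these arguments, a complete invocation looks like the following sketch. The key location is an example, and the -t rsa and -N '' options are added here to make the command non-interactive; they are not part of the table above:

```shell
# Generate a 2048-bit RSA key pair into a directory of your choice.
KEYDIR=$(mktemp -d)                           # example location
ssh-keygen -t rsa -b 2048 -f "$KEYDIR/mykey" -N '' -q
ls "$KEYDIR"                                  # mykey (private), mykey.pub (public)
```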
2. Accept the default key type of SSH-2 RSA and set the Number of bits in a
generated key to 2048 if it is not already set.
3. Click Generate.
4. Move your mouse around the blank area to generate random data in the key, as shown
below.
(The red line in the following image is for illustration purposes only. It doesn't appear in
the generator pane as you move the mouse.)
5. The generated key appears under Public key for pasting into OpenSSH
authorized_keys file.
6. The Key comment is generated for you, including the date and time stamp. You can
keep the generated key comment or replace it with a more descriptive comment of your
own.
7. Leave the Key passphrase field blank.
8. Click Save private key and then click Yes in the prompt about saving the key without
a passphrase.
The key pair is saved in the PuTTY Private Key (PPK) format, which is a proprietary
format that works only with the PuTTY tool set.
You can call the key anything you want, but use the ppk file extension, for example,
mykey.ppk.
9. Select all of the generated key that appears under Public key for pasting into
OpenSSH authorized_keys file, copy it using Ctrl + C, paste it into a text file, and
then save the file in the same location as the private key.
(Do not use Save public key because it does not save the key in the OpenSSH format.)
You can call the key anything you want, but for consistency, use the same name as the
private key and a file extension of pub, for example, mykey.pub.
10. Write down the names and location of your public and private key files. You will need
the public key when launching an instance. You will need the private key to access the
instance via SSH.
Now that you have a key pair, you're ready to launch instances as described in Launching an
Instance.
Launching an Instance
You can launch an instance using the Console or API. When you launch an instance, it is
automatically attached to a Virtual Network Interface Card (VNIC) in the cloud network's
subnet and given a private IP address from the subnet's CIDR. You can either let the address
be automatically assigned, or specify a particular address of your choice. The private
IP address lets instances within the cloud network communicate with each other. They can
instead use fully qualified domain names (FQDNs) if you've set up the cloud network for DNS
(see DNS in Your Virtual Cloud Network).
You have the option to assign the instance a public IP address if the subnet is public. A public
IP address is required to communicate with the instance over the Internet, and to establish a
Secure Shell (SSH) or RDP connection to the instance from outside the cloud network. For
more information, see Internet Access.
For administrators: The simplest policy to enable users to launch instances is listed in Let
Users Launch Instances. It gives the specified group general access to managing instances
and images, along with the required level of access to attach existing block volumes to the
instances. If the group needs to create block volumes, they'll need the ability to manage block
volumes (see Let Volume Admins Manage Block Volumes and Backups).
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
Prerequisites
l A cloud network to launch the instance into. For information about setting up cloud
networks, see Overview of Networking.
l The public key, in OpenSSH format, from the key pair that you plan to use for
connecting to the instance via SSH. The following sample public key is abbreviated for
readability:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA....lo/gKMLVM2xzc1xJr/Hc26biw3TXWGEakrK1OQ== rsa-key-20160304
For information about generating a key pair, see Managing Key Pairs on Linux
Instances.
1. Open the Console, click Compute, choose a Compartment you have permission to
work in, and then click Launch Instance.
2. In the Launch Instance dialog box, you specify the resources to use for your instance.
By default, your instance launches in, and the resources you choose come from, your
current compartment. Click the click here link in the dialog box if you want to enable
compartment selection for the instance's image, cloud network, or subnet resources.
You can also choose a different compartment in which to launch your instance.
In the Launch Instance dialog box, you can specify the following:
l Launch in Compartment: The compartment in which you want to launch the
instance.
l Name: The name for the instance. You can add or change the name later. The
name doesn't need to be unique; an Oracle Cloud Identifier (OCID) uniquely
identifies the instance.
l Availability Domain: The availability domain in which you want to run the
instance.
l Image Compartment: The compartment from which to select the image.
l Image Source: The source of the image to use for booting the instance. The
following options are available:
o ORACLE-PROVIDED OS IMAGE: If you select this option, the IMAGE
OPERATING SYSTEM drop-down appears, and you can select one of the
images that are available in Oracle Cloud Infrastructure. See Oracle-
Provided Images for a list of these images.
o CUSTOM IMAGE: If you select this option, the IMAGE drop-down appears,
listing all of the custom images available in the current or selected
compartment. See Managing Custom Images for more information about
custom images.
o BOOT VOLUME: Select this option to launch an instance based on an
existing boot volume. If you select this option, the BOOT VOLUME drop-
down appears, listing all of the available boot volumes in the current or
selected compartment. You can only launch an instance using a boot volume
where the associated instance has been terminated. See Boot Volumes for
more information.
o IMAGE OCID: Select this option to specify the image OCID for a specific
image to use to launch the instance. See Oracle-Provided Image Release
Notes to determine the image OCID for Oracle-provided images.
l Shape Type: Select VIRTUAL MACHINE or BARE METAL MACHINE.
l Shape: A template that determines the number of CPUs, amount of memory, and
other resources allocated to a newly created instance. The SHAPE drop-down is
populated with the list of available VM or bare metal shapes based on what you
selected for shape type. Incompatible shapes in the list are grayed out and you
will not be able to select them. The following table lists the available bare metal
and VM shapes:
Bare Metal Shapes
VM Shapes
VMs are an option that provides flexibility in compute power, memory capability,
and network resources for lighter applications. You can use Block Volume to add
network-attached block storage as needed.
X7 Shapes Availability
After the instance is provisioned, details about it appear in the instance list. To view additional
details, including IP addresses, click the instance name.
When the instance is fully provisioned and running, you can connect to it using SSH as
described in Connecting to an Instance.
You also can attach a volume to the instance, provided the volume is in the same availability
domain.
Prerequisites
l A cloud network to launch the instance into. For information about setting up cloud
networks, see Overview of Networking.
l A security list that enables Remote Desktop Protocol (RDP) access so you can connect to
your instance. Specifically, you need a stateful ingress rule for TCP traffic on destination
port 3389 from source 0.0.0.0/0 and any source port. For more information, see
Security Lists.
To enable RDP access
1. In the Console, click Networking, and then click Virtual Cloud Networks.
2. Click the cloud network you're interested in.
3. Click Security Lists, and then click the security list you're interested in.
4. Click Edit Rules.
5. Under Allow Rules for Ingress, click Add Rule.
6. Enter the following for your new rule:
a. Source CIDR: 0.0.0.0/0
b. IP Protocol: TCP
c. Source Port Range: All
d. Destination Port Range: 3389
7. When done, click Save Security List Rules.
The following screenshot shows the settings for the new rule:
1. Open the Console, click Compute, choose a Compartment you have permission to
work in, and then click Launch Instance.
2. In the Launch Instance dialog box, you specify the resources to use for your instance.
By default, your instance launches in, and the resources you choose come from, your
current compartment. Click the click here link in the dialog box if you want to enable
compartment selection for the instance's image, cloud network, or subnet resources.
You can also choose a different compartment in which to launch your instance.
In the Launch Instance dialog box, you can specify the following:
l Launch in Compartment: The compartment in which you want to launch the
instance.
l Name: The name for the instance. You can add or change the name later. The
name doesn't need to be unique; an Oracle Cloud Identifier (OCID) uniquely
identifies the instance.
l Availability Domain: The availability domain in which you want to run the
instance.
l Image Compartment: The compartment from which to select the image.
l Image Source: The source of the image to use for booting the instance. The
following options are available:
o ORACLE-PROVIDED OS IMAGE: If you select this option, the IMAGE
OPERATING SYSTEM drop-down appears, and you can select one of the
images that are available in Oracle Cloud Infrastructure. See Oracle-
Provided Images for a list of these images.
o CUSTOM IMAGE: If you select this option, the IMAGE drop-down appears,
listing all of the custom images available in the current or selected
compartment. See Managing Custom Images for more information about
custom images.
o BOOT VOLUME: Select this option to launch an instance based on an
existing boot volume. If you select this option, the BOOT VOLUME drop-
down appears, listing all of the available boot volumes in the current or
selected compartment. You can only launch an instance using a boot volume
where the associated instance has been terminated. See Boot Volumes for
more information.
o IMAGE OCID: Select this option to specify the image OCID for a specific
image to use to launch the instance. See Oracle-Provided Image Release
Notes to determine the image OCID for Oracle-provided images.
l Shape Type: Select VIRTUAL MACHINE or BARE METAL MACHINE.
l Shape: A template that determines the number of CPUs, amount of memory, and
other resources allocated to a newly created instance. The SHAPE drop-down is
populated with the list of available VM or bare metal shapes based on what you
selected for shape type. Incompatible shapes in the list are grayed out and you
will not be able to select them. The following table lists the available bare metal
and VM shapes:
Bare Metal Shapes
VM Shapes
VMs are an option that provides flexibility in compute power, memory capability,
and network resources for lighter applications. You can use Block Volume to add
network-attached block storage as needed.
X7 Shapes Availability
l ListInstances
l LaunchInstance
l GetInstance
l UpdateInstance
l TerminateInstance
l GetWindowsInstanceInitialCredentials
Connecting to an Instance
You can connect to a running instance by using a Secure Shell (SSH) or Remote Desktop
connection. Most UNIX-style systems include an SSH client by default. To connect to a Linux
instance from a Windows system, you can download a free SSH client called PuTTY from
http://www.putty.org.
For administrators: Here's a more restrictive policy that lets the specified group get the IP
address of existing instances and use power actions on the instances (e.g., stop, start, etc.),
but not launch or terminate instances. The policy assumes the instances and the cloud
network are together in a single compartment (XYZ):
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
Prerequisites
You'll need the following information to connect to the instance:
l For Linux Instances: The full path to the key pair that you used when you launched the
instance. For information about generating key pairs, see Managing Key Pairs on Linux
Instances.
l The default user name for the instance. If you used an Oracle-provided Linux, CentOS or
Windows image to launch the instance, the user name is opc. If you used the Ubuntu
image to launch the instance, the user name is ubuntu.
l The public IP address of the instance. You can get the address from the list of instances
in the Console. Click Compute, choose your Compartment, and then find your
instance in the list. Alternatively, you can use the Core Services API
ListVnicAttachments and GetVnic operations.
<private_key> is the full path and name of the file that contains the private key
associated with the instance you want to access.
<private_key> is the full path and name of the file that contains the private key
associated with the instance you want to access.
<username> is the default name for the instance. For Oracle Linux and CentOS images,
the default user name is opc. For the Ubuntu image, the default name is ubuntu.
<public-ip-address> is your instance IP address that you retrieved from the Console.
To enable Remote Desktop Protocol (RDP) access to the Windows instance, you need to add a
stateful ingress rule for TCP traffic on destination port 3389 from source 0.0.0.0/0 and any
source port.
The following screenshot shows the settings for the new rule:
6. If you are connecting to the instance for the first time, enter the initial password that
was provided to you by Oracle Cloud Infrastructurewhen you launched the instance. You
will be prompted to change the password as soon as you log in. Your new password
must be at least 12 characters long and must comply with Microsoft's password policy.
Otherwise, enter the password you created. If you are using a custom image, you may
need to know the password for the instance the image was created from. For details
about Windows custom images, see Creating Windows Custom Images.
7. Press Enter.
3. In the Resources section on the Instance Details page, click Console Connections,
and then click Create Console Connection.
4. Specify the public key portion for the SSH key, either by browsing and selecting a public
key file, for example id_rsa.pub, or by pasting your public key into the text box, and
then click Create Console Connection.
Once the console connection has been created and is available, the status changes to ACTIVE.
You connect to the serial console by using an SSH client. Mac OS X and most Linux
distributions by default include the SSH client OpenSSH.
Windows does not include an SSH client by default, so you need to install one. You can use
PuTTY, or there are options that include a version of OpenSSH such as:
Note that the steps to connect to the serial console from the PuTTY client will be different than
the steps for OpenSSH.
2. Click the Actions icon ( ), and then click Connect with SSH.
3. If you are using PuTTY, select WINDOWS for PLATFORM.
If you are using OpenSSH select LINUX/MAC OS for PLATFORM.
4. Click Copy to copy the string to the clipboard.
5. Paste the connection string copied from the previous step to PuTTY or your OpenSSH
client and hit enter to connect to the console.
6. Hit enter again to activate the console.
Once you have created the console connection for the instance, you need to set up a secure
tunnel to the VNC server on the instance, and then you can connect with a VNC client.
To set up a secure tunnel to the VNC server on the instance using OpenSSH on
Mac OS X or Linux
1. In the Console, on the Instances Details page, in the Resources section, click
Console Connections.
2. Click the Actions icon ( ), and then click Connect with VNC.
3. Select LINUX/MAC OS for PLATFORM.
4. Click Copy to copy the string to the clipboard.
5. Paste the connection string copied from the previous step to a terminal window on a Mac
OS X or Linux system, and hit enter to set up the secure connection.
6. Once the connection has been established, open your VNC client and specify localhost
as the host to connect to and 5900 as the port to use.
To set up a secure tunnel to the VNC server on the instance using PowerShell
on Windows
1. In the Console, on the Instances Details page, in the Resources section, click
Console Connections.
2. Click the Actions icon ( ), and then click Connect with VNC.
3. Select WINDOWS for PLATFORM.
4. Click Copy to copy the string to the clipboard.
5. Paste the connection string copied from the previous step to Windows Powershell and hit
enter to set up the secure connection.
6. Once the connection has been established, open your VNC client and specify localhost
When you connect, you may see a warning from the VNC
client that the connection is not encrypted. Since you are
connecting through SSH, the connection is secure, so this is
not an issue.
Both of these tasks require you to boot into a bash shell, in maintenance mode.
6. Reboot the instance from the terminal window by entering the keyboard shortcut
CTRL+x.
Once the instance has rebooted, you'll see the Bash shell command-line prompt, and you can
proceed with either of the following procedures.
2. Run the following command to remount the root partition with read/write permissions:
/bin/mount -o remount, rw /
2. Run the following command to remount the root partition with read/write permissions:
/bin/mount -o remount, rw /
3. From the Bash shell, run the following command to change to the SSH key directory for
the opc user:
cd ~opc/.ssh
4. Rename the existing authorized keys file with the following command:
mv authorized_keys authorized_keys.old
5. Replace the contents of the public key file with the new public key file with the following
command:
echo '<contents of .pub key file>' >> authorized_keys
Once you are finished using the console connection, delete the connection for the instance.
If you created your instance using an Oracle-provided Windows image, you can create new
users after you log on to the instance through a Remote Desktop client.
The new users then can SSH to the instance using the appropriate private keys.
1. Generate an SSH key pair for the new user. See Managing Key Pairs on Linux Instances.
2. Copy the public key value to a text file for use later in this procedure.
3. Log in to your instance. See Connecting to an Instance.
4. Become the root user:
sudo su
7. Copy the SSH public key that you saved to a text file into the /home/new_
user/.ssh/authorized_keys file:
echo <public_key> > /home/<new_user>/.ssh/authorized_keys
8. Change the owner and group of the /home/username/.ssh directory to the new user:
chown -R <new_user>:<group> /home/<new_user>/.ssh
9. To enable sudo privileges for the new user, run the visudo command and edit the
/etc/sudoers file as follows:
a. In /etc/sudoers, look for:
%<username> ALL=(ALL) NOPASSWD: ALL
For administrators: The policy in Let Users Launch Instances includes the ability to manage
console history data. If the specified group doesn't need to launch instances or attach
volumes, you could simplify that policy to include only manage instance-family, and
remove the statements involving volume-family and virtual-network-family.
l CaptureConsoleHistory
l DeleteConsoleHistory
l GetConsoleHistory
l GetConsoleHistoryContent
l ListConsoleHistories
You can find some of this information in the Console on the Compute page, or you can get all
of it by logging in to the instance and using the metadata service. The service runs on every
instance and is an HTTP endpoint listening on 169.254.169.254.
For administrators: Users can also get instance metadata through the Compute API (e.g., with
GetInstance). The policy in Let Users Launch Instances covers that ability. If the specified
group doesn't need to launch instances or attach volumes, you could simplify that policy to
include only manage instance-family, and remove the statements involving volume-family
and virtual-network-family.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
curl http://169.254.169.254/opc/v1/instance/metadata/
curl http://169.254.169.254/opc/v1/instance/metadata/<key-name>
"state" : "RUNNING",
"image" : null,
"metadata": {
"ssh_authorized_keys" : "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCZ06fccNTQfq+JNL
xubFlJ5ZR3kt+uzspdH9tXL+lAejSM1NXPMZKQEo73rKpUUkN121BqL46yTI2JjfFwjJWMJlTkFo
M+CFZev7MIxfEjas06y80ZBZ7DUTQO0GxJPeD8NCOb1VorF8M4xuLwrmzRtkoZzU16umt4y1W0Q4
ifdp3IiiU0U8/WxczSXcUVZOLqkz5dc6oMHdMVpkimietWzGZ4LBBsH/LjEVY7E0V+a0sNchlVDI
Zcm7ErReBLcdTGDq0uLBiuChyl6RUkX1PNhusquTGwK7zc8OBXkRuubn5UKXhI3Ul9Nyk4XESkVW
IGNKmw8mSpoJSjR8P9ZjRmcZVo8S+x4KVPMZKQEo73rKpUUkN121BqL46yTI2JjfFwjJWMJlTkFo
EjRVJ/jf4IythUnkW5RA/2mgu79kbwqPM90J8pRKyjWehl8VsN5wUY+mZQ3jzIfeC9qNKjn5e976
4DFhvw35JHh5sHyzyLVuyLe3teZ6kRUKyQqA9Oy4DMctmbD1uDAz73tWbvdz7SmfWJ5QZ7/Aod0a
Gce6FJS/srWfB+7f/+SX+fu926LqlJa/3r/AGS4IvDfEKvtZCWgTPVrEHVSTuEiDzG9yBuJLZ95J
2AMmeKhnFOKpAsoWVN5kV55RmmQVJaozQHr7V+FaGc8IHDs95vgz4F3AJTi+xl3cvr+6vlfDJNse
c/jRx/+XZynrpltFGvTAUaqXJFowow== [email protected]",
"user_data" : "SWYgeW91IGNhbiBzZWUgdGhpcywgdGhlbiBZdCB3b3JrZWQgbWF5YmUuCg=="
}
l http://169.254.169.254/opc/v1/instance/
l http://169.254.169.254/opc/v1/instance/metadata/
l http://169.254.169.254/opc/v1/instance/metadata/<key-name>
In the example <key-name>, is user_data or any custom key name that you provided when
you launched the instance.
Renaming an Instance
You can rename an instance without changing its Oracle Cloud Identifier (OCID).
For administrators: The policy in Let Users Launch Instances includes the ability to rename an
instance. If the specified group doesn't need to launch instances or attach volumes, you could
simplify that policy to include only manage instance-family, and remove the statements
involving volume-family and virtual-network-family.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
l You should never shut down a VM instance from within the instance (for example,
running shutdown -h now while logged in to the instance). You will not be able to
restart it, even using the API or the Console.
l The stop and restart actions that you perform within the instance are not reflected in the
instance state in the API or Console.
For administrators: The policy in Let Users Launch Instances includes the ability to stop or
start an existing instance. If the specified group doesn't need to launch instances or attach
volumes, you could simplify that policy to include only manage instance-family, and
remove the statements involving volume-family and virtual-network-family.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
2. In the list of instances, find the instance you want to stop or start, and then click the
instance name to display the instance details.
3. Click one of the following actions:
START
Restarts a stopped instance. After the instance is restarted, the Stop action is
enabled.
STOP
Shuts down the instance. After the instance is powered off, the Start action is
enabled.
REBOOT
Resource Billing
Terminating an Instance
You can permanently terminate (delete) instances that you no longer need. Any attached
VNICs and volumes are automatically detached when the instance terminates. Eventually, the
instance's public and private IP addresses are released and become available for other
instances. By default, the instance's boot volume is deleted when you terminate the instance,
however you can preserve the boot volume associated with the instance, so that you can
attach it to a different instance as a data volume, or use it to launch a new instance.
For administrators: The policy in Let Users Launch Instances includes the ability to terminate
an instance (with or without an attached block volume).
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for instances, cloud networks, or other Core Services API
resources, see Details for the Core Services.
3. Click the highlighted name of the instance to display the instance details.
4. Click Terminate, and then respond to the confirmation prompt.
If you want to preserve the boot volume associated with the instance, uncheck
Permanently delete the attached Boot Volume.
Terminated instances temporarily remain in the list of instances with the status
Terminated.
Compute Performance
The content in the sections below apply to Category 7 and Section 3.a of the Oracle PaaS
and IaaS Public Cloud Services Pillar documentation.
Oracle Cloud Infrastructure provides a variety of instance configurations in both bare metal
and virtual machine (VM) shapes. Each shape varies on multiple dimensions including
memory, CPU cores, network bandwidth, and the option of local NVMe SSD storage found in
DenseIO shapes.
Oracle Cloud Infrastructure provides a service-level agreement (SLA) for NVMe performance.
Measuring performance is complex and open to variability.
A NVMe drive also has non-uniform drive performance over the period of drive usage. A NVMe
drive performs differently when tested brand new compared to when tested in a steady-state
after some duration of usage. New drives have not incurred many write/erase cycles and the
inline garbage collection has not had a significant impact on IOPS performance. To achieve
the goal of reproducibility and reduced variability, our testing focuses on the steady-state
duration of the NVMe drive’s operation.
Testing Methodology
Before running any tests, protect your data by making a
backup of your data and operating system environment to
prevent any data loss. The tests described in this document
will overwrite the data on the disk, and cause data
corruption.
Summary: To capture the IOPS measure, first provision a shape such as the new
BM.DenseIO2.52, and then use the Gartner Cloud Harmony test suite to run tests on an
instance running the latest supported Oracle Linux image for each NVMe drive target.
Instructions:
1.
Launch an instance based on the latest supported Oracle Linux image and select a shape
such as the new BM.DenseIO2.52. For launch instructions, see Launching an Instance.
2. Run the Gartner Cloud Harmony test suite tests on the instance for each NVMe drive
target. The following is an example of a command that will work for all shapes and
drives on the shape:
sudo ./run.sh `ls /dev/nvme[0-9]n1 | sed -e 's/\//\--target=\//'`
--nopurge –noprecondition --fio_direct=1 --fio_size=10g --test=iops
--skip_blocksize=512b --skip_blocksize=8k --skip_blocksize=16k
--skip_blocksize=32k --skip_blocksize=64k --skip_blocksize=128k
--skip_blocksize=1m
The SLA for NVMe drive performance is measured against 4k block sizes with 100% random
write workload on DenseIO shapes where the drive is in a steady-state of operation.
Performance Benchmarks
The following table lists the minimum IOPS for the specified shape to meet the SLA, given the
testing methodology with 4k block sizes for 100% random write tests using the tests
described in the previous section.
VM.DenseIO1.4 200k
VM.DenseIO1.8 250k
VM.DenseIO1.16 400k
BM.DenseIO1.36 2.5MM
VM.DenseIO2.8 250k
VM.DenseIO2.16 400k
VM.DenseIO2.24 800k
BM.DenseIO2.52 3.0MM
While the NVMe drives are capable of higher IOPS, Oracle Cloud Infrastructure currently
guarantee this minimum level of IOPS performance.
A: We test hosts on a regular basis to ensure that are our low-level software updates do not
regress performance. In the event you have reproduced the testing methodology and your
drive’s performance does not meet the terms in the SLA please contact your Oracle sales
team.
Q: Why does the testing methodology not represent a diversity of IO workloads such as
random reads and writes to reflect real world IO?
A: We will make changes to provide greater customer value through better guarantees and
improved reproducibility.
TRANSFER DEVICE
A transfer device is your HDD that is specially prepared to copy and upload data to Oracle
Cloud Infrastructure. You copy your data onto one or more of these devices and ship them in
a parcel to Oracle to upload. The HDD can be an internal SATA II/III 3.5" drive.
TRANSFER PACKAGE
A transfer package is the logical representation of the parcel containing the HDD transfer
devices that you ship to Oracle to upload to Oracle Cloud Infrastructure.
TRANSFER JOB
HOST
The computer at your site on which you download the Data Transfer Utility to prepare your
transfer devices.
BUCKET
The logical container in Oracle Cloud Infrastructure Object Storage where Oracle
operators will upload your data. A bucket is associated with a single compartment in your
tenancy that has policies that determine what actions a user can perform on a bucket and
on all the objects in the bucket.
A new or existing IAM user that has the authorization and permissions to create and
manage transfer jobs. See Authentication and Authorization later in this topic.
A temporary IAM user that grants Oracle personnel the authorization and permissions to
upload the data from your transfer devices to your designated Oracle Cloud Infrastructure
Object Storage bucket. You should delete this temporary user once you data is uploaded
to Oracle Cloud Infrastructure. See Authentication and Authorization later in this topic.
1. Generate a manifest for each transfer device using the Data Transfer Utility.
2. Generate the "dry run" report for each transfer device using the Data Transfer Utility.
3. Lock each transfer device using the Data Transfer Utility.
1. Create one or more transfer packages using the Console or the Data Transfer Utility.
2. Attach the transfer devices to the transfer packages using the Console or the Data
Transfer Utility.
3. Get the shipping address for the transfer packages using the Console or the Data
Transfer Utility.
4. Package the transfer devices into a box, and ship the box using an approved shipping
vendor.
l The Data Transfer Utility uses the standard Linux dm-crypt and LUKS utilities to encrypt
block devices.
l The dm-crypt software generates a master AES-256 bit encryption key that it uses for
all data written to or read from the device. That key is protected by an encryption
passphrase that the user must know to access the encrypted data.
l When the data transfer administrator uses the Data Transfer Utility to create devices,
Oracle Cloud Infrastructure creates a very strong encryption passphrase that is
displayed to the user and passed to dm-crypt. The passphrase is displayed to standard
output only once and cannot be retrieved again. Copy this passphrase to a durable,
secure location for future reference.
l All network communication between the Data Transfer Utility and Oracle Cloud
Infrastructure is encrypted in-transit using Transport Layer Security (TLS).
l After copying your data to the transfer devices, you generate a manifest file for each
device using the Data Transfer Utility. The manifest contains an index of all of the
copied files, as well as generated data integrity hashes. The Data Transfer Utility also
encrypts and copies the config_upload_user configuration file to each transfer device.
This configuration file describes the temporary IAM data transfer upload user you create
in step 5 of the "Performing prerequisite tasks in preparation for transfer data" section.
Oracle uses the credentials and entries defined in the config_upload_user file when
processing the transfer device to upload files from the HDD to Oracle Cloud
Infrastructure Object Storage. Oracle cannot upload data from the transfer devices
without the correct credentials defined in this configuration file. See Data Transfer
Utility for detailed information about the required configuration files.
l After copying your data to a transfer device, you need to generate a manifest file using
the Data Transfer Utility. The manifest contains an index of all of the copied files, as
well as generated data integrity hashes. The Data Transfer Utility also encrypts and
copies the config_upload_user configuration file to the transfer device. This
configuration file describes the temporary IAMdata transfer upload user you create in
step 5 of the "Performing prerequisite tasks in preparation for transfer data" section.
Oracle uses the credentials and entries defined in the config_upload_user file when
processing the transfer device to upload files from the HDD to Oracle Cloud
Infrastructure Object Storage. Oracle cannot upload data from the transfer devices
without the correct credentials defined in this configuration file. See Data Transfer
Utility for more information about the required configuration files.
l When you disconnect or lock a transfer device using the Data Transfer Utility, the
original encryption passphrase is required to once again access the device. If the
encryption passphrase is not known or lost, you cannot access the data on the transfer
device. To reuse a transfer device, you must reformat the device and any data on that
device will be lost.
l Oracle retrieves the encryption passphrase for a transfer device from Oracle Cloud
Infrastructure. Oracle uses the passphrase to decrypt, mount the transfer device, and
upload the data to the designated bucket in the tenancy.
l After processing a transfer package, Oracle returns all transfer devices attached to the
transfer package using the return shipping label you provide. Oracle returns the transfer
l The Data Transfer Utility is a full-featured command-line tool. For more information and
installation instructions, see Data Transfer Utility.
l The Console is an easy-to-use, partial-featured browser-based interface. For more
information, see Signing In to the Console.
You can perform many data transfer tasks using either the
Console or the Data Transfer Utility. However, there are
some tasks you can only perform using the Data Transfer
Utility (for example, creating and locking transfer devices).
Managing HDD Data Transfers describes the management
tasks in detail and guides you to the appropriate
management interface to use for each task
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
See Preparing for Data Transfers for specific information about the policies required for data
transfer. If you're new to policies, see Getting Started with Policies and Common Policies.
l For information about installing and using the Data Transfer Utility, see Data Transfer
Utility.
l For information on performing prerequisite tasks for data transfer, see Preparing for
Data Transfers.
l For task documentation related to HDD data transfer, see Managing HDD Data
Transfers.
l For instructions on how to create a bucket, see "Putting Data into Object Storage" in the
Oracle Cloud Infrastructure Getting Started Guide.
Access to resources is provided to groups using policies and then inherited by the users that
are assigned to those groups. Data transfer requires the creation of two distinct groups. One
group is for data transfer administrators who can create and manage transfer jobs. Another
group is for data transfer upload users who can upload data to Object Storage. For details on
creating groups, see Managing Groups.
l The data transfer administrator group requires an authorization policy that includes the
following:
Allow group <group_name> to manage data-transfer-jobs in compartment <compartment_name>
l The data transfer upload user group requires an authorization policy that includes the
following:
Allow group <group_name> to manage buckets in compartment <compartment_name> where all {
request.permission='BUCKET_READ' }
The Oracle Cloud Infrastructure administrator then adds a user to each of the data transfer
groups created. For details on creating users, see Managing Users.
Many data transfer tasks can be performed either using the Console or using the Data
Transfer Utility. However, some data transfer tasks can only be performed using the Data
Transfer Utility. This document guides you to the appropriate management interface to use for
each task.
The Data Transfer Utility must be run as the root user because this utility formats, encrypts,
and mounts HDDs.
A transfer job represents the collection of files that you want to transfer and signals the
intention to upload those files to Oracle Cloud Infrastructure. A transfer job combines at least
one transfer device with a transfer package. You must know which compartment and Object
Storage bucket the data will be uploaded to. You must also create the transfer job in the same
compartment as the upload bucket and supply a human-readable name for the transfer job.
Avoid entering confidential information when providing job names.
Creating a transfer job returns a job ID that you will use in other transfer tasks. For example:
ocid1.datatransferjob.region1.phx.aaaaaaaag3maq2ufbygw243vw5hkz7mnayyt2zb6v6y5weidmpjhha5e26va
To display the list of transfer jobs using the Data Transfer Utility
At the command prompt on the host, run dts job list to display the list of transfer jobs.
dts job list --compartment-id <compartment_id>
To display the details of a transfer job using the Data Transfer Utility
At the command prompt on the host, run dts job show to display the details of a transfer
job.
dts job show --job-id <job_id>
To edit the name of a transfer job using the Data Transfer Utility
At the command prompt on the host, run dts job update to edit the name (--display-name)
of a transfer job.
dts job update --job-id <job_id> --display-name <display_name>
Typically, you would delete a transfer job early in the transfer process and before you create
any transfer packages or devices. For example, you initiated the data transfer by creating a
transfer job, but changed your mind. If you want to delete a transfer job later in the transfer
process, you must first delete all transfer packages and devices associated with the transfer
job.
A device can be attached to one package, detached, and then attached to another package.
For example:
/mnt/orcdts_DJZNWK3ET
Registering a transfer device encrypts the HDD and generates a strong encryption
passphrase. The encryption passphase is displayed to standard output to the data transfer
administrator user and cannot be retrieved again. Create a local, secure copy of the
encryption passphrase, if you need to reference the passphrase again.
Creating a transfer device requires the job ID returned from when you created the transfer
job and the path to the attached HDD (for example, /dev/sdb).
Typically, you would delete a transfer device during the device preparation process. You
created, attached, and/or copied data to the transfer device, but have changed your mind
about shipping the device. If you want to reuse the device, remove all file systems and create
the device again.
You would cancel a transfer device if you shipped this device to Oracle, but have changed your
mind about uploading the files. You can cancel a device in a transfer package, while allowing
the file upload from other devices.
Oracle cannot process canceled transfer devices. Oracle returns canceled transfer devices to
the sender.
You can only copy regular files to transfer devices. Special files (links, sockets, pipes, etc.)
cannot be copied directly. If you need to transfer special files, create a tar archive of the files
and copy the tar archive to the transfer device.
After copying your data to a transfer device, you need to generate a manifest file using the
Data Transfer Utility. The manifest contains an index of all of the copied files, as well as
generated data integrity hashes. The Data Transfer Utility also encrypts and copies the
config_upload_user configuration file to the transfer device. This configuration file
describes the temporary IAMdata transfer upload user you create in step 5 of the "Performing
prerequisite tasks in preparation for transfer data" section. Oracle uses the credentials and
entries defined in the config_upload_user file when processing the transfer device to upload
files from the HDD to Oracle Cloud Infrastructure Object Storage. Oracle cannot upload data
from the transfer devices without the correct credentials defined in this configuration file. See
Data Transfer Utility for more information about the required configuration files.
You can generate a dry-run report to review the transfer results before the actual data upload.
The report compares the contents of the generated manifest file with the contents of the
target bucket. This report can help determine if you have duplicate files or naming collision
issues.
Locking a transfer device safely unmounts the HDD and removes the encryption passphrase
from the host. You will be prompted for the encryption passphrase when you lock the transfer
device.
If you need to unlock the transfer device again, you need the encryption passphrase that was
generated when you created the transfer device.
Creating a transfer package initiates the paperwork required for shipping the HDDs to Oracle
and tracks the associated transfer devices and shipment information.
Creating a transfer package requires the job ID returned from when you created the transfer
job. For example:
ocid1.datatransferjob.region1.phx.aaaaaaaag3maq2ufbygw243vw5hkz7mnayyt2zb6v6y5weidmpjhha5e26va
To display the details of a transfer package using the Data Transfer Utility
At the command prompt on the host, run dts package show to display the details of a
transfer package.
dts package show --job-id <job_id> --package-label <package_label>
You will need to edit the transfer package and supply the tracking information when you ship
the package.
Typically, you would delete a transfer package early in the transfer process and before you
created any transfer devices. You initiated the transfer job and package, but have changed
your mind. If you delete a transfer package later in the transfer process, you must first delete
all associated transfer devices. You cannot delete a transfer package once the package has
been shipped to Oracle.
You would cancel a transfer package if you have shipped the transfer package, but have
changed your mind. You must cancel all transfer devices associated with the transfer package
before you can cancel the transfer package. Oracle cannot process canceled transfer
packages. Oracle returns canceled transfer packages to the sender.
You attach a transfer device to a transfer package after you have copied your data onto the
device, generated the required manifest file, run and reviewed the dry-run report, and then
locked the transfer device in preparation for shipment.
You have attached a transfer device to a transfer package, but have changed your mind about
shipping that device with the transfer package. You can also detach a transfer device from one
transfer package and attach that device to a different transfer package.
You can find the shipping address in the transfer package details.
To get the shipping address for a transfer package using the Console
1. Open the Console, click Storage, and then click Data Transfer.
2. Find the transfer job for which you want to see the details.
3. Click the Actions icon ( ), and then click View Details.
Alternatively, click on the hyper-linked name of the transfer job.
A list of transfer packages that have already been created is displayed.
4. Find the transfer package for which you want to see the details.
5. Click the Actions icon ( ), and then click View Details.
Alternatively, click on the hyper-linked name of the transfer job.
To get the shipping address for a transfer package using the Data Transfer
Utility
At the command prompt on the host, run dts package show to get the shipping address for a
transfer package.
dts package show --job-id <job_id> --package-label <package_label>
Include the required return shipping label in the box when packaging transfer devices for
shipment.
After delivering the transfer package to the shipping vendor, update the transfer package with
the tracking information.
To update the transfer package with tracking information using the Console
1. Open the Console, click Storage, and then click Data Transfer.
2. Find the transfer job for which you want to see the associated transfer packages.
3. Click the Actions icon, and then click View Details.
A list of transfer packages that have already been created is displayed.
4. Find the transfer package that you want to edit.
5. Click the Actions icon, and then click View Details.
6. Click Edit.
7. Enter the Tracking ID and the Return Tracking ID.
8. Click Edit Transfer Package.
To update the transfer package with tracking information using the Data
Transfer Utility
At the command prompt on the host, run dts package ship to update the transfer package
tracking information.
dts package ship --job-id <job_id> --package-label <package_label> --package-vendor <vendor_name> --tracking-number <tracking_number> --return-tracking-number <return_tracking_number>
When Oracle has processed the transfer devices associated with a transfer package, the
status of the transfer package changes to Processed. When Oracle has shipped the transfer
devices associated with a transfer package, the status of the transfer package changes to
Returned.
To check the status of a transfer package using the Data Transfer Utility
At the command prompt on the host, run dts package show to show the status of a transfer
package.
dts package show --job-id <job_id> --package-label <package_label>
manifest file to the contents of the target Oracle Cloud Infrastructure Object Storage bucket
after file upload.
The top of the log report summarizes the overall file processing status:
P - Present: The file is present in both the device and the target bucket.
M - Missing: The file is present in the device but not in the target bucket. It was likely uploaded and then deleted by another user before the summary was generated.
C - Name Collision: The file is present in the manifest, but a file with the same name and different contents is present in the target bucket.
U - Unreadable: The file is not readable from the disk.
N - Name Too Long: The file name on disk is too long and could not be uploaded.
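As a quick way to tally these codes, a sketch is shown below. It assumes, purely for illustration, that each line of the summary log begins with the status letter followed by the file name; the file name and log format here are hypothetical, not the documented report layout.

```shell
# Build a sample log in the assumed "<code> <file>" format.
cat > summary.log <<'EOF'
P photos/img001.jpg
P photos/img002.jpg
M docs/report.pdf
C docs/readme.txt
EOF
# Tally occurrences of each status code.
cut -d' ' -f1 summary.log | sort | uniq -c
```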
Typically, you would close a transfer job when no further transfer job activity is required or
possible. Closing a transfer job requires that the status of all associated transfer packages be
returned, canceled, or deleted. In addition, the status of all associated transfer devices must
be complete, in error, missing, canceled, or deleted.
The Database service supports several types of DB Systems, ranging in size, price, and
performance. For details about each type of system, start with the following topics.
l Exadata DB Systems
l Bare Metal and Virtual Machine DB Systems
License Types
Oracle Cloud Infrastructure supports a licensing model with two license types. With License
included, the cost of the cloud service includes a license for the Database service. With
Bring Your Own License (BYOL), Oracle Database customers with an Unlimited License
Agreement or Non-Unlimited License Agreement can use their license with Oracle Cloud
Infrastructure. You do not need separate on-premises licenses and cloud licenses. BYOL DB
instances support all advanced Database service manageability functionality, including
backing up and restoring a DB system, patching, and Oracle Data Guard.
You can enable BYOL when you launch a DB system. Enabling BYOL affects how usage data for the instance is metered and, consequently, how it is billed. For database versions 12.2, 12.1, and 11.2, the Database service supports BYOL, subject to the following shape and edition constraints:
l If you enable BYOL, you cannot switch between the BYOL and license-included licensing
model on the same instance. Instead, you have to terminate and then recreate the
instance.
l The Database service supports BYOL only for customers who use the Universal Credit
Plan. Non-metered customers cannot use BYOL. Existing customers can migrate from a
non-metered model to a Universal Credit Plan.
l You can only use the options you purchased as part of your ULA.
l If you have Standard or Enterprise Licenses with additional options, you need to use a
Standard Edition or Enterprise Edition license.
l If you have any additional database option other than RAC, Active Data Guard, Database
In-Memory, or Multitenant, you need to use Enterprise Edition High Performance.
l If you have Active Data Guard, Database In-Memory, or Multitenant, you need to use
Enterprise Edition Extreme Performance. If you choose the Extreme Performance
edition for a RAC configuration (for example, 2-node RAC or 2-node RAC on VMs), then
the additional OCPUs will be charged at the RAC OCPU pricing.
Resource Identifiers
Each Oracle Cloud Infrastructure resource has a unique, Oracle-assigned identifier called an
Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify
your resources, see Resource Identifiers.
To access the Console, you must use a supported browser. Oracle Cloud Infrastructure
supports the latest versions of Google Chrome, Microsoft Edge, Internet Explorer 11, Firefox,
and Firefox ESR. Note that private browsing mode is not supported for Firefox, Internet
Explorer, or Edge.
For more information on tenancies and compartments, see "Key Concepts and Terminology"
in the Oracle Cloud Infrastructure Getting Started Guide. For general information about using
the API, see About the API.
An administrator in your organization needs to set up groups, compartments, and policies that
control which users can access which services, which resources, and the type of access. For
example, the policies control who can create new users, create and manage the cloud
network, launch instances, create buckets, download objects, etc. For more information, see
Getting Started with Policies. For specific details about writing policies for each of the
different services, see Policy Reference.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud
Infrastructure resources that your company owns, contact your administrator to set up a user
ID for you. The administrator can confirm which compartment or compartments you should be
using.
Exadata DB Systems
Exadata DB Systems allow you to leverage the power of Exadata within the Oracle Cloud
Infrastructure. An Exadata DB System consists of a quarter rack, half rack, or full rack of
compute nodes and storage servers, tied together by a high-speed, low-latency InfiniBand
network and intelligent Exadata software. You can configure automatic backups, optimize for
different workloads, and scale up the system to meet increased demands.
Subscription Types
You have a choice of subscription types for Exadata DB Systems:
Metering Frequency
Monthly metering is the only available option. You are billed for each Exadata DB System that
you use, and not for each database deployment that you use.
l Scaling within an Exadata DB System lets you modify compute node processing power
within the system.
l Scaling across Exadata DB System configurations lets you move to a different
configuration, for example, from a quarter rack to a half rack.
If an Exadata DB System requires more compute node processing power, you can scale up the
number of enabled CPU cores in the system. For a non-metered Exadata DB System, you can
temporarily modify the compute node processing power (bursting) or add compute node
processing power on a more permanent basis. For a metered Exadata DB System, you can
simply modify the number of enabled CPU cores.
For information on CPU cores per configuration, see System Configuration. For information on
scaling a system, see To scale an Exadata DB System.
Scaling across Exadata DB System configurations enables you to move to a different system
configuration. This is useful when a database deployment requires:
l Processing power that is beyond the capacity of the current system configuration.
l Storage capacity that is beyond the capacity of the current system configuration.
l A performance boost that can be delivered by increasing the number of available
compute nodes.
l A performance boost that can be delivered by increasing the number of available
Exadata Storage Servers.
Scaling from a quarter rack to a half rack, or from a half rack to a full rack, requires that the
data associated with your database deployment is backed up and restored on a different
Exadata DB System, which requires planning and coordination between you and Oracle. To
start the process, submit a service request to Oracle.
System Configuration
Exadata DB Systems are offered in quarter rack, half rack or full rack configurations, and
each configuration consists of compute nodes and storage servers. The compute nodes are
each configured with a Virtual Machine (VM). You have root privilege for the compute node
VMs, so you can load and run additional software on them. However, you do not have
administrative access to the Exadata infrastructure components, including the physical
compute node hardware, network switches, power distribution units (PDUs), integrated lights-
out management (ILOM) interfaces, or the Exadata Storage Servers, which are all
administered by Oracle.
You have full administrative privileges for your databases, and you can connect to your
databases by using Oracle Net Services from outside the Oracle Cloud Infrastructure. You are
responsible for database administration tasks such as creating tablespaces and managing
database users. You can also customize the default automated maintenance set up, and you
control the recovery process in the event of a database failure.
The following table outlines the vital statistics for each system configuration.
Storage Configuration
When you launch an Exadata DB System, the storage space inside the Exadata Storage
Servers is configured for use by Oracle Automatic Storage Management (ASM). By default,
the following ASM disk groups are created:
l The DATA disk group is intended for the storage of Oracle Database data files.
l The RECO disk group is primarily used for storing the Fast Recovery Area (FRA), which
is an area of storage where Oracle Database can create and manage various files
related to backup and recovery, such as RMAN backups and archived redo log files.
l The DBFS and ACFS disk groups are system disk groups that support various
operational purposes. The DBFS disk group is primarily used to store the shared
clusterware files (Oracle Cluster Registry and voting disks), while the ACFS disk groups
are primarily used to store Oracle Database binaries. Compared to the DATA and RECO
disk groups, the system disk groups are so small that they are typically ignored when
discussing the overall storage capacity. You should not store Oracle Database data files
or backups inside the system disk groups.
The disk group names contain a short identifier string that is associated with your Exadata
Database Machine environment. For example, the identifier could be C2, in which case the
DATA disk group would be named DATAC2, the RECO disk group would be named RECOC2, and
so on.
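The naming pattern above is simply the base disk group name with the environment identifier appended; a small illustrative sketch:

```shell
# Illustrative only: construct disk group names for identifier C2,
# matching the DATAC2/RECOC2 examples in the text.
id=C2
for dg in DATA RECO DBFS; do
  echo "${dg}${id}"
done
```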
Whether you choose to perform database backups on Exadata storage profoundly affects how storage space in the Exadata Storage Servers is allocated to the ASM disk groups.
If you choose to provision for backups on Exadata storage, approximately 40% of the
available storage space is allocated to the DATA disk group and approximately 60% is
allocated to the RECO disk group. If you choose not to provision for backups on Exadata
storage, approximately 80% of the available storage space is allocated to the DATA disk
group and approximately 20% is allocated to the RECO disk group. After the storage is
configured, the only way to adjust the allocation without reconfiguring the whole environment
is by submitting a service request to Oracle. For details, see My Oracle Support Note
2007530.1.
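As a worked example of these approximate splits, the sketch below applies them to 84 TB of usable storage (the quarter-rack figure used later in this topic). The arithmetic is illustrative; actual allocations may differ slightly.

```shell
# Approximate DATA/RECO allocation for 84 TB of usable storage,
# using the ~40/60 (backups on Exadata storage) and ~80/20
# (no backups on Exadata storage) splits described above.
awk 'BEGIN {
  u = 84
  printf "with backups:    DATA ~%.1f TB, RECO ~%.1f TB\n", u*0.40, u*0.60
  printf "without backups: DATA ~%.1f TB, RECO ~%.1f TB\n", u*0.80, u*0.20
}'
```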
The following table shows how the usable storage capacity is allocated to the DATA and RECO
disk groups for each configuration option. The usable storage capacity is the storage that's
available for Oracle Database files after taking into account high-redundancy ASM mirroring
(triple mirroring), which is used to provide highly resilient database storage on all Exadata
DB Systems. The usable storage capacity does not factor in the effects of Exadata
compression capabilities, which can be used to increase the effective storage capacity.
When you launch an Exadata DB System using the Console or the API, the system is
provisioned to support Oracle databases. The service creates an initial database based on the
options you provide and some default options described later in this topic.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
For administrators: The policy in Let Database Admins Manage Database Systems lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for databases, see Details for the Database Service.
Prerequisites
l The public key, in OpenSSH format, from the key pair that you plan to use for
connecting to the DB System via SSH. A sample public key, abbreviated for readability,
is shown below.
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA....lo/gKMLVM2xzc1xJr/Hc26biw3TXWGEakrK1OQ== rsa-key-20160304
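A quick client-side sanity check that a key file matches the single-line OpenSSH format shown above can be sketched as follows. The accepted key types listed are only a common subset, and this is an illustrative check, not the service's validation.

```shell
# Check that a public key file is a single line of the form:
#   <type> <base64-body> [comment]
is_openssh_pubkey() {
  [ "$(wc -l < "$1")" -le 1 ] &&
  grep -Eq '^(ssh-rsa|ssh-ed25519|ecdsa-sha2-nistp256) [A-Za-z0-9+/=]+( [^ ]+)?$' "$1"
}
printf 'ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA rsa-key-20160304\n' > key.pub
is_openssh_pubkey key.pub && echo "looks valid"
```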
l Exadata DB Systems require two separate VCN subnets: a client subnet for user data
and a backup subnet for backup traffic.
l Do not use a subnet that overlaps with 192.168.128.0/20. This restriction applies to both
the client subnet and backup subnet.
l You can define the client subnet as either a private subnet or a public subnet. However,
you must define the backup subnet as a public subnet to back up the database to Object
Storage.
l Oracle requires that you use a VCN Resolver for DNS name resolution for the client
subnet. It automatically resolves the Swift endpoints required for backing up databases,
patching, and updating the cloud tooling on an Exadata DB System.
For more information, see DNS in Your Virtual Cloud Network.
l Important! Properly configure the security list ingress and egress rules. The client
subnet must allow TCP and ICMP traffic between all nodes and all ports in the respective
subnet. If TCP connectivity fails across nodes, the Exadata DB System fails to provision.
For example, if the client subnet uses the source CIDR 10.0.5.0/24, create rules as
shown in the following example.
Ingress Rules:
Source: 10.0.5.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: All
Allows: TCP traffic for ports: all
Source: 10.0.5.0/24
IP Protocol: ICMP
Type and Code: All
Allows: ICMP traffic for: all types and codes
Egress Rules:
Destination: 10.0.5.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: All
Allows: TCP traffic for ports: all
Destination: 10.0.5.0/24
IP Protocol: ICMP
Type and Code: All
Allows: ICMP traffic for: all types and codes
For information about creating and editing rules, see Security Lists.
For the backup subnet, you'll need to configure only an egress rule to allow HTTPS
access to Object Storage. For details, see Backing Up an Exadata Database.
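The 192.168.128.0/20 restriction above spans 192.168.128.0 through 192.168.143.255. A rough client-side check for candidate subnets of size /24 or smaller can be sketched as below; it is illustrative only and does not parse arbitrary CIDR masks.

```shell
# Return success if the candidate address falls inside
# 192.168.128.0/20 (third octet 128-143). Only meaningful for
# candidate subnets no wider than /24.
overlaps_reserved() {
  case "$1" in
    192.168.12[89].*|192.168.13[0-9].*|192.168.14[0-3].*) return 0 ;;
    *) return 1 ;;
  esac
}
overlaps_reserved 192.168.130.0 && echo "overlaps: choose another subnet"
overlaps_reserved 10.0.5.0 || echo "ok"
```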
To simplify launching a DB System in the Console and when using the API, the following
default options are used for the initial database.
For a list of the database options that you can set in the Console, see To launch an Exadata DB
System.
Option Description
DB System Information
Display Name: A friendly display name for the DB System. The name doesn't need to be unique. An Oracle Cloud Identifier (OCID) uniquely identifies the DB System.
Shape: The shape to use to launch the DB System. The shape determines the type of DB System and the resources allocated to the system.
l BM.HighIO1.36: Provides a 1-node DB System (one bare metal server), with up to 36 CPU cores, 512 GB memory, and four 3.2 TB locally attached NVMe drives (12.8 TB total) to the DB System.
l BM.DenseIO1.36: Provides a 1-node DB System (one bare metal server), with up to 36 CPU cores, 512 GB memory, and nine 3.2 TB locally attached NVMe drives (28.8 TB total) to the DB System.
l BM.RACLocalStorage1.72: Provides a 2-node RAC DB System (two bare metal servers), with up to 36 CPU cores on each node (72 total per cluster), 512 GB memory, and direct attached shared storage with twenty 3.2 TB SSD drives (64 TB total).
Note that the 64 TB storage for this shape is available only in the Phoenix (PHX) region and the Frankfurt (FRA) region. Storage for the Ashburn (IAD) region is twenty 1.2 TB SSD drives (24 TB total).
l Exadata.Quarter1.84: Provides a 2-node Exadata DB System with 22 enabled CPU cores, with up to 62 additional CPU cores, 720 GB RAM per node, 288 TB of raw storage, 84 TB of usable storage, and unlimited I/O. This shape supports only the Enterprise Edition - Extreme Performance.
l Exadata.Half1.168: Provides a 4-node Exadata DB System with 44 enabled CPU cores, with up to 124 additional CPU
Cluster Name: A unique cluster name for a multi-node DB System. The name must begin with a letter and contain only letters (a-z and A-Z), numbers (0-9), and hyphens (-). The cluster name can be no longer than 11 characters and is not case sensitive.
Total Node Count: The number of nodes in the DB System. The number depends on the shape you select. You can specify 1 or 2 nodes for Virtual Machine DB Systems, except for VM.Standard1.1, which is a single-node DB System.
Oracle Database Software Edition: The database edition supported by the DB System. You can mix supported database versions on the DB System, but not editions. (The database edition cannot be changed and applies to all the databases in this DB System.)
CPU Core Count: The number of CPU cores for the DB System. Displays only if you select a shape that allows you to configure the number of cores. The text below the field indicates the acceptable values for that shape. For a multi-node DB System, the core count is evenly divided across the nodes.
Bare Metal DB Systems only: You can increase the CPU cores to accommodate increased demand after you launch the DB System.
License Type: The type of license you want to use for the DB System. Your choice affects metering for billing.
License included means the cost of the cloud service includes a license for the Database service.
Bring Your Own License (BYOL) means you are an Oracle Database customer with an Unlimited License Agreement or Non-Unlimited License Agreement and want to use your license with Oracle Cloud Infrastructure. This removes the need for separate on-premises licenses and cloud licenses.
SSH Public Key: The public key portion of the key pair you want to use for SSH access to the DB System. To provide multiple keys, paste each key on a new line. Make sure each key is on a single, continuous line. The length of the combined keys cannot exceed 10,000 characters.
Data Storage Percentage: The percentage (40% or 80%) assigned to DATA storage (user data and database files). The remaining percentage is assigned to RECO storage (database redo logs, archive logs, and recovery manager backups).
Network Information
Virtual Cloud Network Compartment: The compartment containing the network in which to launch the DB System.
Backup Subnet: For Exadata DB Systems only. The subnet to use for the backup network, which is typically used to transport backup information to and from Oracle Cloud Infrastructure Object Storage, and for Data Guard replication.
Do not use a subnet that overlaps with 192.168.128.0/20. This restriction applies to both the client subnet and backup subnet.
If you plan to back up databases to Object Storage, see the network prerequisites in Backing Up an Exadata Database.
Hostname Prefix: Your choice of host name for the DB System. The host name must begin with an alphabetic character and can contain a maximum of 30 alphanumeric characters, including hyphens (-).
Note: The host name must be unique within the subnet. If it is not unique, the DB System will fail to provision.
Host Domain Name: The domain name for the DB System. If the selected subnet uses the Oracle-provided Internet and VCN Resolver for DNS name resolution, this field displays the domain name for the subnet and it can't be changed. Otherwise, you can provide your choice of a domain name. Hyphens (-) are not permitted.
For Exadata DB Systems, if you plan to store database backups in Object Storage, Oracle recommends using a VCN Resolver for DNS name resolution for the client subnet because it automatically resolves the Swift endpoints used for backups.
Host and Domain URL: Combines the host and domain names to display the fully qualified domain name (FQDN) for the database. The maximum length is 64 characters.
Database Information
Database Name: The name for the database. The database name must begin with an alphabetic character and can contain a maximum of eight alphanumeric characters. Special characters are not permitted.
Database Version: The version of the initial database created on the DB System when it is launched. After the DB System is active, you can create additional databases by using the command line interface available on the DB System. You can mix database versions on the DB System, but not editions.
PDB Name: Not applicable to version 11.2.0.4. The name of the pluggable database. The PDB name must begin with an alphabetic character and can contain a maximum of 30 alphanumeric characters. The only special character permitted is the underscore (_).
Database Admin Password: A strong password for SYS, SYSTEM, TDE wallet, and PDB Admin. The password must be nine to thirty characters and contain at least two uppercase, two lowercase, two numeric, and two special characters. The special characters must be _, #, or -.
Automatic Backup: Check the check box to enable automatic incremental backups for this database.
Database Workload: Select the workload type that best suits your application.
Online Transactional Processing (OLTP) configures the database for a transactional workload, with a bias towards high volumes of random data access.
Decision Support System (DSS) configures the database for a decision support or data warehouse workload, with a bias towards large data scanning operations.
Character Set: The character set for the database. The default is AL32UTF8.
National Character Set: The national character set for the database. The default is AL16UTF16.
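The Database Admin Password rules above can be checked client-side before launch; the following is an illustrative sketch, not Oracle's validator.

```shell
# Verify: 9-30 characters; at least two uppercase, two lowercase,
# two digits, and two of the special characters _ # - ; no other
# characters allowed.
check_admin_password() {
  p=$1
  len=${#p}
  [ "$len" -ge 9 ] && [ "$len" -le 30 ] || return 1
  printf %s "$p" | grep -Eq '^[A-Za-z0-9_#-]+$' || return 1
  [ "$(printf %s "$p" | tr -cd 'A-Z' | wc -c)" -ge 2 ] || return 1
  [ "$(printf %s "$p" | tr -cd 'a-z' | wc -c)" -ge 2 ] || return 1
  [ "$(printf %s "$p" | tr -cd '0-9' | wc -c)" -ge 2 ] || return 1
  [ "$(printf %s "$p" | tr -cd '_#-' | wc -c)" -ge 2 ]
}
check_admin_password 'WElcome__12' && echo "acceptable"
```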
l Provisioning: Yellow icon. Resources are being reserved for the DB System, the
system is booting, and the initial database is being created. Provisioning can take
several minutes. The system is not ready to use yet.
l Available: Green icon. The DB System was successfully provisioned. A few
minutes after the system enters this state, you can SSH to it and begin using it.
l Starting: Yellow icon. The DB System is being powered on by the start or reboot
action in the Console or API.
l Stopping: Yellow icon. The DB System is being powered off by the stop or reboot
action in the Console or API.
l Stopped: Yellow icon. The DB System was powered off by the stop action in the
Console or API.
l Terminating: Gray icon. The DB System is being deleted by the terminate action
in the Console or API.
l Terminated: Gray icon. The DB System has been deleted and is no longer
available.
l Failed: Red icon. An error condition prevented the provisioning or continued
operation of the DB System.
You can also check the status of DB Systems and database nodes using the ListDbSystems or
ListDbNodes API operations, which return the lifecycleState attribute.
l Stop: Shuts down the node. After the node is powered off, the Start action is
enabled.
l Reboot: Shuts down the node, and then restarts it.
Resource Billing
1. Open the Console, click Database, and then choose your Compartment.
2. In the list of DB Systems, find the system you want to scale and click its highlighted
name.
The system details are displayed.
3. Click Scale Up/Down and then change the number in Total CPU Core Count. The
text below the field indicates the acceptable values, based on the shape used when the
DB System was launched.
4. Click Scale Up/Down DB System.
Note
1. Open the Console, click Database, and then choose your Compartment.
A list of DB Systems is displayed.
2. For the DB System you want to terminate, click the Actions icon, and then click Terminate.
3. Confirm when prompted.
The DB System's icon indicates Terminating.
At this point, you cannot connect to the system and any open connections are terminated.
1. Open the Console, click Database, and then choose your Compartment.
2. In the list of DB Systems, find the system you want to scale and click its highlighted
name.
The system details are displayed.
3. Click Scale Up/Down OCPU and then change the number.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
DB Systems:
l GetDbSystem
l LaunchDbSystem
l ListDbSystems
l TerminateDbSystem
Database homes:
l GetDbHome
l ListDbHomes
Databases:
l GetDatabase
l ListDatabases
Shapes and versions:
l ListDbSystemShapes
l ListDbVersions
All the traffic in an Exadata DB System is, by default, routed through the data network. To
route backup traffic to the backup interface (BONDETH1), you need to configure a static route
on each of the compute nodes in the cluster.
Before you configure a static route on the compute nodes, keep the following in mind:
l The Exadata DB System's cloud network (VCN) must be configured with an internet
gateway. Add a route table rule to open the access to the Object Storage Service Swift
endpoint on CIDR 0.0.0.0/0. For more information, see Route Tables.
l Oracle recommends that you update the backup subnet's security list to disallow any
access from outside the subnet and allow egress traffic for TCP port 443 (https) on CIDR
Ranges 129.146.0.0/16 (Phoenix region), 129.213.0.0/16 (Ashburn region), and
130.61.0.0/16 (Frankfurt region). For more information, see Security Lists.
l The network traffic between the system and Object Storage does not leave the cloud
and never reaches the public internet. For more information, see Connectivity to the
Internet.
Note
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile.
login as: opc
4. Create a new static route for BONDETH1. Replace the file /etc/sysconfig/network-scripts/route-bondeth1 with the following entries.
For Phoenix (PHX) region:
ADDRESS0=129.146.0.0
NETMASK0=255.255.0.0
GATEWAY0=<gateway_from_previous_step>
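The entries above cover the Phoenix region; for the Ashburn (IAD) or Frankfurt (FRA) region, the same file format applies with that region's CIDR from the security-list recommendation earlier. An illustrative Ashburn variant:

```
ADDRESS0=129.213.0.0
NETMASK0=255.255.0.0
GATEWAY0=<gateway_from_previous_step>
```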
The file changes from the previous step take effect immediately after the ifdown and
ifup commands run.
6. Repeat the preceding steps on each compute node in the Exadata DB System.
Important!
DNS lets you use host names instead of IP addresses to communicate with a DB System. You
can use the Internet and VCN Resolver (the DNS capability built into the VCN) as described in
DNS in Your Virtual Cloud Network. Oracle recommends using a VCN Resolver for DNS name
resolution for the client subnet. It automatically resolves the Swift endpoints required for
backing up databases, patching, and updating the cloud tooling on an Exadata DB System.
Prerequisites
For SSH access to a compute node in an Exadata DB System, you'll need the following:
l The full path to the file that contains the private key associated with the public key used
when the system was launched.
l The public or private IP address of the DB System. Use the private IP address to connect from within the virtual cloud network (VCN), or from an on-premises host connected to the VCN through a VPN. Use the DB System's public IP address to connect to the system from outside the cloud (with no VPN). You can find the IP addresses in the Oracle Cloud Infrastructure Console on the Database page.
You can connect to the compute nodes in an Exadata DB System by using a Secure Shell (SSH)
connection. Most UNIX-style systems (including Linux, Solaris, BSD, and OS X) include an
SSH client by default. For Windows, you can download a free SSH client called PuTTY from
http://www.putty.org.
<private key> is the full path and name of the file that contains the private key associated
with the Exadata DB System you want to access.
Use the private or public IP address depending on your network configuration. For more
information, see Prerequisites.
2. Set the environment to the ASM instance. Depending on which node you are connecting to, the ASM instance ID varies, for example, +ASM1, +ASM2, and so on.
[root@ed1db01 ]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
4. Connect as the oracle user and get the details about one of the databases using srvctl
command.
[root@ed1db01 ~]# su - oracle
[oracle@ed1db01 ~]$ . oraenv
ORACLE_SID = [oracle] ? cdbm01
The Oracle base has been set to /u02/app/oracle
[oracle@ed1db01 ~]$ srvctl config database -d cdbm01
Database unique name: cdbm01 <<== DB unique name
Database name:
Oracle home: /u02/app/oracle/product/12.1.0/dbhome_2
Oracle user: oracle
Spfile: +DATAC1/cdbm01/spfilecdbm01.ora
Password file: +DATAC1/cdbm01/PASSWORD/passwd
Domain: data.customer1.oraclevcn.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: racoper
Database instances: cdbm011,cdbm012 <<== SID
Configured nodes: ed1db01,ed1db02
Database is administrator managed
You can connect to a database with SQL Developer by using one of the following methods:
l Create a temporary SSH tunnel from your computer to the database. This method
provides access only for the duration of the tunnel. (When you are done using the
database, be sure to close the SSH tunnel by exiting the SSH session.)
l Open port 1521 for the Oracle default listener by updating the security list used for the
DB System. This method provides more durable access to the database. For more
information, see Updating the Security List.
After you've created an SSH tunnel or opened port 1521 as described above, you can connect to an Exadata DB System using SCAN IP addresses or public IP addresses, depending on how your network is set up and where you are connecting from. You can find the IP addresses in the Console, on the Database details page.
l Use the private SCAN IP addresses, as shown in the following tnsnames.ora example:
testdb=
(DESCRIPTION =
(ADDRESS_LIST=
(ADDRESS = (PROTOCOL = TCP)(HOST = <scanIP1>)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = <scanIP2>)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <dbservice.subnetname.dbvcn.oraclevcn.com>)
)
)
- Define an external SCAN name in your on-premises DNS server. Your application can
resolve this external SCAN name to the DB System's private SCAN IP addresses, and
then the application can use a connection string that includes the external SCAN name.
In the following tnsnames.ora example, extscanname.example.com is defined in the
on-premises DNS server.
testdb =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = <extscanname.example.com>)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <dbservice.subnetname.dbvcn.oraclevcn.com>)
)
)
- When the client uses the public IP address, the client bypasses the SCAN listener and
reaches the node listener, so server-side load balancing is not available.
- When the client uses the public IP address, it cannot take advantage of the VIP failover
feature. If a node becomes unavailable, new connection attempts to the node will hang
until a TCP/IP timeout occurs. You can set client-side sqlnet parameters to limit the
TCP/IP timeout.
The following tnsnames.ora example shows a connection string that includes the CONNECT_TIMEOUT parameter to avoid TCP/IP timeouts.
test=
(DESCRIPTION =
(CONNECT_TIMEOUT=60)
(ADDRESS_LIST=
(ADDRESS = (PROTOCOL = TCP)(HOST = <publicIP1>)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = <publicIP2>)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <dbservice.subnetname.dbvcn.oraclevcn.com>)
)
)
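The descriptor patterns shown above can also be generated programmatically. The following is a minimal sketch; the host values and service name are placeholders, not values from any real system:

```python
def tns_descriptor(hosts, service_name, port=1521, connect_timeout=60):
    """Render a tnsnames.ora-style descriptor with one ADDRESS per host.

    CONNECT_TIMEOUT bounds how long each connection attempt waits, so a
    down node fails fast instead of hanging until a TCP/IP timeout.
    """
    addresses = "".join(
        f"(ADDRESS = (PROTOCOL = TCP)(HOST = {h})(PORT = {port}))" for h in hosts
    )
    return (
        f"(DESCRIPTION = (CONNECT_TIMEOUT={connect_timeout})"
        f"(ADDRESS_LIST={addresses})"
        f"(CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = {service_name})))"
    )

# Example with two placeholder public IPs, mirroring the example above.
desc = tns_descriptor(["<publicIP1>", "<publicIP2>"],
                      "<dbservice.subnetname.dbvcn.oraclevcn.com>")
```

This only builds the string; you would still paste the result into the client's tnsnames.ora file.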
Prerequisite
The compute nodes in the Exadata DB System must be configured to access the Oracle Cloud
Infrastructure Object Storage service. For more information, see Configuring a Static Route
for Accessing the Object Store.
The method for updating the tooling depends on the tooling release that is currently installed
on the compute node. Regardless of the method you use, be sure to repeat the update process
on each compute node in the cluster.
3. Use the following command to display information about the installed cloud tooling and
note the release label (for example, 17.2.4.0.0BM_170508.0954 in the output below).
# rpm -qa|grep -i dbaastools_exa
dbaastools_exa-1.0-1+17.2.4.0.0BM_170508.0954.x86_64
3. Repeat the previous steps on each compute node in the Exadata DB System.
2. Examine the command response and determine the patch ID of the available cloud
tooling update.
The patch ID is listed in the patches group as the patchid value.
Cloud tooling updates are cumulative, so if multiple updates are available, you can
simply install the latest one. There is no need to install all of the updates in order.
3. If the available patch is newer than the currently installed tools, download and apply the
patch containing the cloud tooling update.
# /var/opt/oracle/exapatch/exadbcpatchsm -toolsinst_async <patchid>
where patchid is the patch ID that you located in the previous step.
4. Repeat the previous steps on each compute node in the Exadata DB System.
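Because updates are cumulative, automation that chooses which patch ID to pass to -toolsinst_async only needs the newest entry. A minimal sketch, assuming each available patch is described by a patchid and a plain dotted version string (these field names and values are illustrative, not the tool's actual output format):

```python
def latest_patch(patches):
    """Pick the patchid of the newest cloud tooling update.

    Cloud tooling updates are cumulative, so only the newest patch
    needs to be applied; sort on the numeric version components.
    """
    def version_key(p):
        return tuple(int(x) for x in p["version"].split("."))
    return max(patches, key=version_key)["patchid"]

# Hypothetical listing of two available updates.
available = [
    {"patchid": "25364526", "version": "17.1.4.0.0"},
    {"patchid": "26111111", "version": "17.2.4.0.0"},
]
```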
Note
# /var/opt/oracle/exapatch/exadbcpatchsm -get_status transactionid
Note
You must update the cloud-specific tooling on all the compute
nodes in your Exadata DB System before performing the
following procedures. For more information, see Updating
Tooling on an Exadata DB System.
Prerequisites
- Patches are stored in the Oracle Cloud Infrastructure Object Storage service, so the
Exadata DB System requires access to the object store. For more information, see
Configuring a Static Route for Accessing the Object Store. Either the client subnet or the
backup subnet can be configured to access the object store.
- The Exadata DB System's cloud network (VCN) must be configured with an internet
gateway. Add a route table rule to open access to the Object Storage Service Swift
endpoint on CIDR 0.0.0.0/0. For more information, see Route Tables.
Oracle recommends that you update the backup subnet's security list to disallow any
access from outside the subnet and to allow egress traffic for TCP port 443 (HTTPS) on CIDR
129.146.0.0/16. For more information, see Security Lists.
Note that the network traffic between the DB System and Object Storage does not leave
the cloud and never reaches the public internet. For more information, see Connectivity
to the Internet.
Managing Patches
where:
- -sshkey specifies the location of the SSH private key of the opc user, which is
used to connect to compute nodes in the cluster. This is an optional parameter.
- -oh specifies a compute node and Oracle home directory for which you want to
list the available patches. In this context, an Oracle home directory may be an
Oracle Database home directory or the Oracle Grid Infrastructure home directory.
For example:
# /var/opt/oracle/exapatch/exadbcpatchmulti -list_patches -oh=exaverify-
73z1v1:/u02/app/oracle/product/12.1.0/dbhome_2 -sshkey=/root/y.priv
INFO: non async case
INFO: cmd is: /var/opt/oracle/exapatch/exadbcpatchsm -list_patches -
oh=/u02/app/oracle/product/12.1.0/dbhome_2
INFO: non async case
INFO: cmd is: /var/opt/oracle/exapatch/exadbcpatch -list_patches -patch_
homes=/u02/app/oracle/product/12.1.0/dbhome_2
Starting EXADBCPATCH
Logfile is /var/opt/oracle/log/exadbcpatch/exadbcpatch_2017-07-03_19:40:49.log
Config file is /var/opt/oracle/exapatch/exadbcpatch.cfg
INFO: dbversion detected : 12102
INFO: patching type : psu
Note
where:
- patchid identifies the patch to be pre-checked.
- -sshkey specifies the location of the SSH private key of the opc user, which is
used to connect to compute nodes in the cluster.
- -instanceN specifies a compute node and one or more Oracle home directories
that are subject to the patching operation. In this context, an Oracle home
directory may be an Oracle Database home directory or the Oracle Grid
Infrastructure home directory.
For example:
# /var/opt/oracle/exapatch/exadbcpatchmulti -precheck_async 12345678
-sshkey=/home/opc/.ssh/id_rsa
-instance1=hostname1:/u01/app/12.1.0.2/grid,/u01/app/oracle/product/12.1.0.2/dbhome_1
To apply a patch
You can apply a patch by using the exadbcpatchmulti command, which:
- Can be used to patch some or all of your compute nodes using one command.
- Coordinates multi-node patching in a rolling manner.
- Can execute patch-related SQL after patching all the compute nodes in the cluster.
You can perform a patching operation using the exadbcpatchmulti command as follows:
where:
- patchid identifies the patch to be applied.
- -sshkey specifies the location of the SSH private key of the opc user, which is
used to connect to compute nodes in the cluster.
- -instanceN specifies a compute node and one or more Oracle home directories
that are subject to the patching operation. In this context, an Oracle home
directory may be an Oracle Database home directory or the Oracle Grid
Infrastructure home directory.
- -run_datasql=1 instructs the exadbcpatchmulti command to execute patch-
related SQL commands.
Notes
For example:
# /var/opt/oracle/exapatch/exadbcpatchmulti -apply_async 23456789
-sshkey=/home/opc/.ssh/id_rsa
-instance1=hostname1:/u01/app/oracle/product/12.1.0.2/dbhome_1
-instance2=hostname2:/u01/app/oracle/product/12.1.0.2/dbhome_1
-run_datasql=1
You can use the opatch utility to determine the patches that have been applied to an Oracle
Database or Grid Infrastructure installation.
You can roll back a patch by using the exadbcpatchmulti command, which:
- Can be used to roll back a patch on some or all of your compute nodes using one
command.
- Coordinates multi-node operations in a rolling manner.
- Can execute rollback-related SQL after rolling back the patch on all the compute nodes
in the cluster.
You can perform a patch rollback operation using the exadbcpatchmulti command as
follows:
where:
- patchid identifies the patch to be rolled back.
- -sshkey specifies the location of the SSH private key of the opc user, which is
used to connect to compute nodes in the cluster.
- -instanceN specifies a compute node and one or more Oracle home directories
that are subject to the rollback operation. In this context, an Oracle home
directory may be an Oracle Database home directory or the Oracle Grid
Infrastructure home directory.
- -run_datasql=1 instructs the exadbcpatchmulti command to execute rollback-
related SQL commands.
Notes
For example:
# /var/opt/oracle/exapatch/exadbcpatchmulti -rollback_async 34567890
-sshkey=/home/opc/.ssh/id_rsa
-instance1=hostname1:/u01/app/12.1.0.2/grid
-instance2=hostname2:/u01/app/12.1.0.2/grid
-run_datasql=1
Enterprise Manager Database Express 12c (EM Express) is available on Exadata DB System
database deployments created using Oracle Database 12c Release 2 (12.2) or Oracle
Database 12c Release 1 (12.1).
How you access EM Express depends on whether you want to manage a CDB or PDB.
For both CDBs and PDBs, you must add the port to a security list as described in Updating the
Security List.
To confirm the port that is in use for a specific database, connect to the database as a
database administrator and execute the query shown in the following example:
SQL> select dbms_xdb_config.getHttpsPort() from dual;
DBMS_XDB_CONFIG.GETHTTPSPORT()
------------------------------
5502
SETTING THE PORT FOR EM EXPRESS TO MANAGE A PDB (ORACLE DATABASE 12.1 ONLY)
In Oracle Database 12c Release 1, a unique HTTPS port must be configured for the root
container (CDB) and each PDB that you manage using EM Express.
To configure an HTTPS port so that you can manage a PDB with EM Express:
1. Invoke SQL*Plus and log in to the PDB as the SYS user with SYSDBA privileges.
2. Execute the DBMS_XDB_CONFIG.SETHTTPSPORT procedure.
SQL> exec dbms_xdb_config.sethttpsport(port-number)
ACCESSING EM EXPRESS
Before you access EM Express, add the port to the security list. See Updating the Security List.
After you update the security list, you can access EM Express by directing your browser to the
URL https://<node-ip-address>:<port>/em, where node-ip-address is the public IP
address of the compute node hosting EM Express, and port is the EM Express port used by the
database.
You can confirm the Database Control port for a database by searching for REPOSITORY_URL in
the $ORACLE_HOME/host_sid/sysman/config/emd.properties file.
Before you access Database Control, add the port for the database to the security list
associated with the Exadata DB System's client subnet. For more information, see Updating
the Security List.
After you update the security list, you can access Database Control by directing your browser
to the URL https://<node-ip-address>:<port>/em, where node-ip-address is the public
IP address of the compute node hosting Database Control, and port is the Database Control
port used by the database.
Before you can access EM Express or Database Control, you must add the port for the
database to the security list associated with the Exadata DB System's data (client) subnet. To
add the port to the security list:
1. In the Console, click Database and locate the DB System in the list.
2. Note the DB System's Client Subnet name and click its Virtual Cloud Network.
3. Locate the subnet in the list, and then click its security list under Security Lists.
4. Click Edit All Rules and add an ingress rule with source CIDR=<source CIDR>,
protocol=TCP, and port=<port number or port range>.
The source CIDR should be the CIDR block that includes the client hosts that will connect
to the database.
For detailed information about creating or updating a security list, see Security Lists.
The dbaasapi utility is located in the /var/opt/oracle/dbaasapi/ directory on the compute
nodes and must be run as the root user.
Notes
You must update the cloud-specific tooling on all the compute
nodes in your Exadata DB System before performing the
following procedures. For more information, see Updating
Tooling on an Exadata DB System.
Prerequisites
The following are prerequisites if you plan to create a database and store backups in the
Oracle Cloud Infrastructure Object Storage service.
- The Exadata DB System requires access to the object store. For more information, see
Configuring a Static Route for Accessing the Object Store.
- The DB System's cloud network (VCN) must be configured with an internet gateway.
Add a route table rule to open access to the Object Storage Service Swift endpoint
on CIDR 0.0.0.0/0. For more information, see Route Tables.
Oracle recommends that you update the backup subnet's security list to disallow any
access from outside the subnet and to allow egress traffic for TCP port 443 (HTTPS) on CIDR
129.146.0.0/16. For more information, see Security Lists.
Note that the network traffic between the DB System and Object Storage does not leave
the cloud and never reaches the public internet. For more information, see Connectivity
to the Internet.
- An existing Object Storage bucket to use as the backup destination. You can use the
Console or the Object Storage API to create the bucket. For more information, see
Managing Buckets.
- A Swift password generated by Oracle Cloud Infrastructure. You can use the Console or
the IAM API to generate the password. For more information, see Working with Swift
Passwords.
- The user name specified in the backup configuration file must have tenancy-level access
to Object Storage. An easy way to do this is to add the user name to the Administrators
group. However, that allows access to all of the cloud services. Instead, an
administrator can create a policy that allows tenancy-level access to just Object
Storage. The following is an example of such a policy.
Allow group DBAdmins to manage buckets in tenancy
For more information about adding a user to a group, see Managing Groups. For more
information about policies, see Getting Started with Policies.
Creating a Database
The following procedure creates a directory called dbinput, a sample input file called
myinput.json, and a sample output file called createdb.out.
3. Make a directory for the input file and change to the directory.
[root@dbsys ~]# mkdir -p /home/oracle/dbinput
[root@dbsys ~]# cd /home/oracle/dbinput
4. Create the input file in the directory. The following sample file will create a database
configured to store backups in an existing bucket in Object Storage. For parameter
descriptions, see Create Database Parameters.
{
"object": "db",
"action": "start",
"operation": "createdb",
"params": {
"nodelist": "",
"dbname": "exadb",
"edition": "EE_EP",
"version": "12.1.0.2",
"adminPassword": "WElcome#123_",
"sid": "exadb",
"pdbName": "PDB1",
"charset": "AL32UTF8",
"ncharset": "AL16UTF16",
"backupDestination": "OSS",
"cloudStorageContainer":
"https://swiftobjectstorage.<region>.oraclecloud.com/v1/mycompany/DBBackups",
"cloudStorageUser": "[email protected]",
"cloudStoragePwd": "1cnk!d0++ptETd&C;tHR"
},
"outputfile": "/home/oracle/createdb.out",
"FLAGS": ""
}
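As a sketch, an input file like the one above can be sanity-checked before invoking dbaasapi. The function name and the required-key list below are illustrative choices of mine, not part of the tooling, and cover only a subset of the parameters the utility actually enforces:

```python
import json

# Illustrative subset of the createdb params shown in the sample file.
REQUIRED_PARAMS = {"nodelist", "dbname", "edition", "version", "adminPassword"}

def check_createdb_input(text):
    """Return a list of problems found in a dbaasapi createdb input file.

    An empty list means the basic structure looks plausible; it does not
    guarantee the utility will accept the file.
    """
    doc = json.loads(text)
    problems = []
    if doc.get("operation") != "createdb":
        problems.append("operation must be 'createdb'")
    missing = REQUIRED_PARAMS - set(doc.get("params", {}))
    if missing:
        problems.append("missing params: " + ", ".join(sorted(missing)))
    return problems
```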
7. Create a JSON file to check the database creation status. Note the action of "status".
Replace the ID and the dbname with the values from the previous steps.
{
"object": "db",
"action": "status",
"operation": "createdb",
"id": 170,
"params": {
"dbname": "exadb"
},
"outputfile": "/home/oracle/createdb.out",
"FLAGS": ""
}
8. Run the utility with the status file as input and then check the utility output.
Rerun the status action regularly until the response indicates that the operation
succeeded or failed.
{
"msg" : "Sync sqlnet file...[done]\\n##Done executing tde\\nWARN: Could not register elogger_
parameters: elogger.pm::_init: /var/opt/oracle/dbaas_acfs/events does not exist\\n##Invoking
assistant bkup\\nUsing cmd : /var/opt/oracle/ocde/assistants/bkup/bkup -out
/var/opt/oracle/ocde/res/bkup.out -sid=\"exadb1\" -reco_grp=\"RECOC1\" -
hostname=\"ed1db01.data.customer1.oraclevcn.com\" -oracle_
home=\"/u02/app/oracle/product/12.1.0/dbhome_5\" -dbname=\"exadb\" -dbtype=\"exarac\" -
exabm=\"yes\" -edition=\"enterprise\" -bkup_cfg_files=\"no\" -acfs_vol_
dir=\"/var/opt/oracle/dbaas_acfs\" -bkup_oss_url=\"bkup_oss_url\" -bkup_oss_user=\"bkup_oss_
user\" -version=\"12102\" -oracle_base=\"/u02/app/oracle\" -firstrun=\"no\" -action=\"config\" -
bkup_oss=\"no\" -bkup_disk=\"no\" -data_grp=\"DATAC1\" -action=config \\n\\n##Done executing
bkup\\nWARN: Could not register elogger_parameters: elogger.pm::_init: /var/opt/oracle/dbaas_
acfs/events does not existRemoved all entries from creg file : /var/opt/oracle/creg/exadb.ini
matching passwd or decrypt_key\\n\\n#### Completed OCDE Successfully ####\\nWARN: Could not
register elogger_parameters: elogger.pm::_init: /var/opt/oracle/dbaas_acfs/events does not
exist",
"object" : "db",
"status" : "Success",
"errmsg" : "",
"outputfile" : "/home/oracle/createdb_exadb.out",
"action" : "start",
"id" : "170",
"operation" : "createdb",
"logfile" : "/var/opt/oracle/log/exadb/dbaasapi/db/createdb/170.log"
}
Parameter Description
nodelist The value "" (an empty string). The database will be created
across all nodes in the cluster.
adminPassword The administrator (SYS and SYSTEM) password to use for the
new database, in quotes. The password must be nine to thirty
characters and contain at least two uppercase, two lowercase,
two numeric, and two special characters. The special characters
must be _, #, or -.
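The password rules above can be sketched as a quick client-side check. This helper is illustrative, not part of the dbaasapi tooling, and it does not reject characters outside the listed sets:

```python
import re

def valid_admin_password(pw):
    """Check the documented adminPassword rules: 9 to 30 characters with
    at least two uppercase, two lowercase, two numeric, and two special
    characters, where the special characters are _, #, or -."""
    return (
        9 <= len(pw) <= 30
        and len(re.findall(r"[A-Z]", pw)) >= 2
        and len(re.findall(r"[a-z]", pw)) >= 2
        and len(re.findall(r"[0-9]", pw)) >= 2
        and len(re.findall(r"[_#-]", pw)) >= 2
    )
```

The sample password used earlier in this topic, "WElcome#123_", satisfies these rules.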
Parameter Description
Allowed values
AL32UTF8, AR8ADOS710, AR8ADOS720, AR8APTEC715,
AR8ARABICMACS, AR8ASMO8X, AR8ISO8859P6, AR8MSWIN1256,
AR8MUSSAD768, AR8NAFITHA711, AR8NAFITHA721,
AR8SAKHR706, AR8SAKHR707, AZ8ISO8859P9E, BG8MSWIN,
BG8PC437S, BLT8CP921, BLT8ISO8859P13, BLT8MSWIN1257,
BLT8PC775, BN8BSCII, CDN8PC863, CEL8ISO8859P14,
CL8ISO8859P5, CL8ISOIR111, CL8KOI8R, CL8KOI8U,
CL8MACCYRILLICS, CL8MSWIN1251, EE8ISO8859P2,
EE8MACCES, EE8MACCROATIANS, EE8MSWIN1250, EE8PC852,
EL8DEC, EL8ISO8859P7, EL8MACGREEKS, EL8MSWIN1253,
EL8PC437S, EL8PC851, EL8PC869, ET8MSWIN923, HU8ABMOD,
HU8CWI2, IN8ISCII, IS8PC861, IW8ISO8859P8,
IW8MACHEBREWS, IW8MSWIN1255, IW8PC1507, JA16EUC,
JA16EUCTILDE, JA16SJIS, JA16SJISTILDE, JA16VMS,
KO16KSCCS, KO16MSWIN949, LA8ISO6937, LA8PASSPORT,
LT8MSWIN921, LT8PC772, LT8PC774, LV8PC1117, LV8PC8LR,
LV8RST104090, N8PC865, NE8ISO8859P10, NEE8ISO8859P4,
RU8BESTA, RU8PC855, RU8PC866, SE8ISO8859P3,
TH8MACTHAIS, TH8TISASCII, TR8DEC, TR8MACTURKISHS,
TR8MSWIN1254, TR8PC857, US7ASCII, US8PC437, UTF8,
VN8MSWIN1258, VN8VN3, WE8DEC, WE8DG, WE8ISO8859P15,
WE8ISO8859P9, WE8MACROMAN8S, WE8MSWIN1252, WE8NCR4970,
WE8NEXTSTEP, WE8PC850, WE8PC858, WE8PC860, WE8ROMAN8,
ZHS16CGB231280, ZHS16GBK, ZHT16BIG5, ZHT16CCDC,
ZHT16DBT, ZHT16HKSCS, ZHT16MSWIN950, ZHT32EUC,
ZHT32SOPS, ZHT32TRIS
Parameter Description
For example:
"backupDestination":"BOTH"
Parameter Description
https://swiftobjectstorage.<region>.oraclecloud.com/v1/<tenant>/<bucket>
For example:
"cloudStorageContainer":"https://swiftobjectstorage.<region>.oraclecloud.com/v1/mycompany/DBBackups"
"cloudStorageUser":"[email protected]"
This is the user name you use to sign in to the Console. The user
name must be a member of the Administrators group, as
described in Prerequisites.
"cloudStoragePwd":"1cnk!d0++ptETd&C;tHR"
Parameter Description
outputfile The absolute path for the output of the request, for example,
"outputfile":"/home/oracle/createdb.out".
Deleting a Database
3. Make a directory for the input file and change to the directory.
[root@dbsys ~]# mkdir -p /home/oracle/dbinput
[root@dbsys ~]# cd /home/oracle/dbinput
4. Create the input file in the directory and specify the database name to delete and an
output file. For more information, see Delete Database Parameters .
{
"object": "db",
"action": "start",
"operation": "deletedb",
"params": {
"dbname": "exadb"
},
"outputfile": "/home/oracle/delete_exadb.out",
"FLAGS": ""
}
{
"msg" : "",
"object" : "db",
"status" : "Starting",
"errmsg" : "",
"outputfile" : "/home/oracle/deletedb.out",
"action" : "start",
"id" : "17",
"operation" : "deletedb",
"logfile" : "/var/opt/oracle/log/exadb/dbaasapi/db/deletedb/17.log"
}
7. Create a JSON file to check the database deletion status. Note the action of "status" in
the sample file below. Replace the ID and the dbname with the values from the previous
steps.
{
"object": "db",
"action": "status",
"operation": "deletedb",
"id": 17,
"params": {
"dbname": "exadb"
},
"outputfile": "/home/oracle/deletedb.out",
"FLAGS": ""
}
8. Run the utility with the status file as input and then check the utility output.
Rerun the status action regularly until the response indicates that the operation
succeeded.
[root@dbsys ~]# /var/opt/oracle/dbaasapi/dbaasapi -i db_status.json
{
"msg" : "Using cmd : su - root -c \"/var/opt/oracle/ocde/assistants/dg/dgcc -dbname exadb -
action delete\" \\n\\n##Done executing dg\\nWARN: Could not register elogger_parameters:
elogger.pm::_init: /var/opt/oracle/dbaas_acfs/events does not exist\\n##Invoking assistant
bkup\\nUsing cmd : /var/opt/oracle/ocde/assistants/bkup/bkup -out
/var/opt/oracle/ocde/res/bkup.out -bkup_oss_url=\"bkup_oss_url\" -bkup_daily_time=\"0:13\" -bkup_
oss_user=\"bkup_oss_user\" -dbname=\"exadb\" -dbtype=\"exarac\" -exabm=\"yes\" -firstrun=\"no\" -
action=\"delete\" -bkup_cfg_files=\"no\" -bkup_oss=\"no\" -bkup_disk=\"no\" -action=delete
\\n\\n##Done executing bkup\\nWARN: Could not register elogger_parameters: elogger.pm::_init:
/var/opt/oracle/dbaas_acfs/events does not exist\\n##Invoking assistant dbda\\nUsing cmd :
/var/opt/oracle/ocde/assistants/dbda/dbda -out /var/opt/oracle/ocde/res/dbda.out -em=\"no\" -pga_
target=\"2000\" -dbtype=\"exarac\" -sga_target=\"2800\" -action=\"delete\" -build=\"no\" -
nid=\"no\" -dbname=\"exadb\" -action=delete \\n",
"object" : "db",
"status" : "InProgress",
"errmsg" : "",
"outputfile" : "/home/oracle/deletedb.out",
"action" : "start",
"id" : "17",
"operation" : "deletedb",
"logfile" : "/var/opt/oracle/log/exadb/dbaasapi/db/deletedb/17.log"
}
Parameter Description
outputfile The absolute path for the output of the request, for example,
"/home/oracle/deletedb.out".
- Create a backup configuration file that indicates the backup destination, when the
backup should run, and how long backups are retained. If the backup destination is
Object Storage, the file also contains the credentials to access the service.
- Associate the backup configuration file with a database. The database will be backed up
as scheduled, or you can create an on-demand backup.
Note
You must update the cloud-specific tooling on all the compute
nodes in your Exadata DB System before performing the
following procedures. For more information, see Updating
Tooling on an Exadata DB System.
Prerequisites
- The Exadata DB System requires access to the Oracle Cloud Infrastructure Object Storage
service. For more information, see Configuring a Static Route for Accessing the Object
Store.
- The Exadata DB System's cloud network (VCN) must be configured with an internet
gateway. Add a route table rule to open access to the Object Storage Service Swift
endpoint on CIDR 0.0.0.0/0. For more information, see Route Tables.
Oracle recommends that you update the backup subnet's security list to disallow any
access from outside the subnet and to allow egress traffic for TCP port 443 (HTTPS) on the
CIDR ranges 129.146.0.0/16 (Phoenix region), 129.213.0.0/16 (Ashburn region), and
130.61.0.0/16 (Frankfurt region). For more information, see Security Lists.
Note that the network traffic between the system and Object Storage does not leave the
cloud and never reaches the public internet. For more information, see Connectivity to
the Internet.
- An existing Object Storage bucket to use as the backup destination. You can use the
Console or the Object Storage API to create the bucket. For more information, see
Managing Buckets.
- A Swift password generated by Oracle Cloud Infrastructure. You can use the Console or
the IAM API to generate the password. For more information, see Working with Swift
Passwords.
- The user name specified in the backup configuration file must have tenancy-level access
to Object Storage. An easy way to do this is to add the user name to the Administrators
group. However, that allows access to all of the cloud services. Instead, an
administrator can create a policy that allows tenancy-level access to just Object
Storage. The following is an example of such a policy.
Allow group DBAdmins to manage buckets in tenancy
For more information about adding a user to a group, see Managing Groups. For more
information about policies, see Getting Started with Policies.
- Full (level 0) backup of the database followed by rolling incremental (level 1) backups
on a seven-day cycle (a 30-day cycle for the Object Storage destination).
- Full backup of selected system files.
- Automatic backups daily at a specific time set during the database deployment creation
process.
Retention period:
- Both Object Storage and local storage: 30 days, with the 7 most recent days' backups
available on local storage.
- Object Storage only: 30 days.
- Local storage only: 7 days.
Encryption:
- Both Object Storage and local storage: All backups to cloud storage are encrypted.
- Object Storage only: All backups to cloud storage are encrypted.
Managing Backups
Important!
$ $ORACLE_HOME/bin/olsnodes -n
The first node has the number 1 listed beside the node name.
vi bkup.cfg
bkup_disk=yes
bkup_oss=yes
bkup_oss_url=https://swiftobjectstorage.<region>.oraclecloud.com/v1/companyabc/DBBackups
[email protected]
bkup_oss_passwd=1cnk!d0++ptETd&C;tHR
bkup_oss_recovery_window=7
bkup_daily_time=06:45
5. Use the following command to install the backup configuration, configure the
credentials, schedule the backup, and associate the configuration with a database
name.
[root@dbsys bkup]# ./bkup -cfg bkup.cfg -dbname=<database_name>
When the scheduled backup runs, you can check its progress with the following command.
[root@dbsys bkup]# /var/opt/oracle/bkup_api/bkup_api bkup_status
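A bkup.cfg file like the sample above can also be rendered from a template. The key names below mirror the sample file; the helper itself and its argument values are illustrative, not part of the backup tooling:

```python
def render_bkup_cfg(oss_url, oss_user, oss_passwd,
                    recovery_window=7, daily_time="06:45", disk=True):
    """Render the body of a bkup.cfg backup configuration file.

    Keys mirror the sample configuration: local disk and OSS backup
    flags, Swift endpoint credentials, retention window in days, and
    the daily backup start time.
    """
    lines = [
        f"bkup_disk={'yes' if disk else 'no'}",
        "bkup_oss=yes",
        f"bkup_oss_url={oss_url}",
        f"bkup_oss_user={oss_user}",
        f"bkup_oss_passwd={oss_passwd}",
        f"bkup_oss_recovery_window={recovery_window}",
        f"bkup_daily_time={daily_time}",
    ]
    return "\n".join(lines) + "\n"

# Placeholder values, not real credentials.
cfg = render_bkup_cfg(
    "https://swiftobjectstorage.<region>.oraclecloud.com/v1/companyabc/DBBackups",
    "[email protected]",
    "<swift-password>",
)
```

You would write the result to bkup.cfg and then install it with the ./bkup command shown above.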
Parameter Description
https://swiftobjectstorage.<region>.oraclecloud.com/v1/<tenant>/<bucket>
To determine the first compute node, connect to any compute node as the grid user and
execute the following command:
$ $ORACLE_HOME/bin/olsnodes -n
The first node has the number 1 listed beside the node name.
2. Log in as opc and then sudo to the root user.
login as: opc
3. You can let the backup follow the current retention policy, or you can create a long-term
backup that persists until you explicitly delete it.
4. Exit the root-user command shell and disconnect from the compute node:
# exit
$ exit
By default, the backup is given a timestamp-based tag. To specify a custom backup tag, add
the --tag option to the bkup_api command; for example, to create a long-term backup with
the tag "monthly", enter the following command:
# /var/opt/oracle/bkup_api/bkup_api bkup_start --keep --tag=monthly
After you enter a bkup_api bkup_start command, the bkup_api utility starts the backup
process, which runs in the background. To check the progress of the backup process, enter the
following command:
# /var/opt/oracle/bkup_api/bkup_api bkup_status --dbname=<database_name>
1. Connect to the first compute node in your Exadata DB System as the opc user.
To determine the first compute node, connect to any compute node as the grid user and
execute the following command:
$ $ORACLE_HOME/bin/olsnodes -n
The first node has the number 1 listed beside the node name.
2. Start a root-user command shell:
$ sudo -s
where dbname is the database name for the database that you want to act on.
A list of available backups is displayed.
4. Delete the backup you want:
# /var/opt/oracle/bkup_api/bkup_api bkup_delete --bkup=<backup-tag> --dbname=<database_name>
WHAT NEXT?
If you used Object Storage as a backup destination, you can display the backup files in your
bucket in the Console on the Storage page, by selecting Object Storage.
You can manually restore a database backup by using the RMAN utility. For information about
using RMAN, see the Oracle Database Backup and Recovery User's Guide for Release 12.2,
12.1, or 11.2.
You can manually restore a database backup by using the RMAN utility. For information about
using RMAN, see the Oracle Database Backup and Recovery User's Guide for Release 12.2,
12.1, or 11.2.
You can manage these systems by using the Console, API, Enterprise Manager, Enterprise
Manager Express, SQL Developer, and the dbcli CLI.
Note
- Standard Edition
- Enterprise Edition
- Enterprise Edition - High Performance
- Enterprise Edition - Extreme Performance (required for 2-node RAC DB Systems)
- 1-node DB Systems consist of a single bare metal server running Oracle Linux 6.8, with
locally attached NVMe storage. If the node fails, you can simply launch another system
and restore databases from current backups.
- 2-node RAC DB Systems consist of two bare metal servers running Oracle Linux 6.8, in a
RAC configuration, with direct-attached shared storage. The cluster provides automatic
failover. This system supports only Enterprise Edition - Extreme Performance and is
recommended for production applications.
When you launch a Bare Metal DB System, you select a single Oracle Database Edition that
applies to all the databases on that DB System. The selected edition cannot be changed. Each
DB System can have multiple database homes, which can be different versions. Each
database home can have only one database, which is the same version as the database home.
When you launch a DB System, you choose a shape, which determines the resources allocated
to the DB System. The available shapes for a Bare Metal DB System are:
Storage Considerations
The shape you choose for a Bare Metal DB System determines its total raw storage, but other
options, like 2- or 3-way mirroring and the space allocated for data files, affect the amount of
usable storage on the system. The following table shows how various configurations affect the
usable storage for 1- and 2-node RAC Bare Metal DB Systems.
Note that the storage information in the table for the 2-node RAC shape applies only to the
Phoenix (PHX) region and the Frankfurt (FRA) region. Storage for the Ashburn (IAD) region is
24 TB SSD of raw storage with 2-way mirroring values of DATA 8.6 TB and RECO 1.6 TB, and
3-way mirroring values of DATA 5.4 TB and RECO 1 TB.
When you launch a Virtual Machine DB System, you select the Oracle Database Edition that
applies to the database on that DB System. The selected edition cannot be changed. Unlike a
Bare Metal DB System, a Virtual Machine DB System can have only a single database home.
The database home will have a single database, which will be the same version as the
database home.
Virtual Machine DB Systems also differ from Bare Metal DB Systems in the following ways:
- A Virtual Machine DB System database uses Oracle Cloud Infrastructure block storage
instead of local storage. You specify a storage size when you launch the DB system, and
you can scale up the storage as needed at any time.
- The number of CPU cores on an existing Virtual Machine DB System cannot be changed.
When you launch a DB System, you choose a shape, which determines the resources allocated
to the DB System. The following table shows the available shapes for a Virtual Machine DB
System.
Shape            CPU Cores  Memory
VM.Standard1.1   1          7 GB
VM.Standard1.2   2          14 GB
VM.Standard1.4   4          28 GB
VM.Standard1.8   8          56 GB
VM.Standard1.16  16         112 GB
Virtual Machine DB Systems use Oracle Cloud Infrastructure block storage. The following
table shows details of the storage options for a Virtual Machine DB System. Total storage
includes available storage plus recovery logs.
Available Storage (GB)  Total Storage (GB)
256                     712
512                     968
1024                    1480
2048                    2656
4096                    5116
6144                    7572
8192                    10032
10240                   12488
12288                   14944
14336                   17404
16384                   19860
18432                   22320
20480                   24776
22528                   27232
24576                   29692
26624                   32148
28672                   34608
30720                   37064
32768                   39520
34816                   41980
36864                   44436
38912                   46896
40960                   49352
For 2-node RAC Virtual Machine DB Systems, storage capacity is shared between the nodes.
Managing DB Systems
This topic explains how to launch, start, stop, terminate, scale, manage licenses for, and
check the status of Bare Metal and Virtual Machine DB Systems, and how to set up DNS for a
1-node or 2-node RAC DB System.
When you launch a DB System using the Console, the API, or the CLI, the system is
provisioned to support Oracle databases and an initial database is created based on the
options you provide and some default options described later in this topic.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
For administrators: The policy in Let Database Admins Manage Database Systems lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for databases, see Details for the Database Service.
Prerequisites
l The public key, in OpenSSH format, from the key pair that you plan to use for
connecting to the DB System via SSH. A sample public key, abbreviated for readability,
is shown below.
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAA....lo/gKMLVM2xzc1xJr/Hc26biw3TXWGEakrK1OQ== rsa-key-20160304
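If you still need a key pair, you can generate one locally. This is a sketch using OpenSSH's ssh-keygen; the file name dbsystem_key is a placeholder:

```shell
# Generate an RSA key pair in OpenSSH format; the file name is a placeholder.
rm -f ./dbsystem_key ./dbsystem_key.pub
ssh-keygen -t rsa -b 2048 -N "" -f ./dbsystem_key

# The contents of the .pub file are what you paste when launching the DB System.
cat ./dbsystem_key.pub
```

Keep the private half (dbsystem_key) safe; it is the file you later pass to ssh -i when connecting.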
l You must use a public subnet. Do not use a subnet that overlaps with 192.168.16.16/28,
which is used by the Oracle Clusterware private interconnect on the database instance.
Specifying an overlapping subnet will cause the private interconnect to malfunction.
l For a 2-node RAC DB System, the subnet must have at least six available IP addresses.
Three of each subnet's IP addresses are reserved, so the minimum allowed subnet size
is /28. For more information, see Allowed VCN Size and Address Ranges.
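The subnet-size requirement can be sanity-checked with quick arithmetic: a /28 subnet holds 2^(32-28) = 16 addresses, and with three reserved, 13 remain, which satisfies the six-address minimum. A sketch:

```shell
# Usable addresses for a given prefix length: 2^(32 - prefix) minus the
# three reserved addresses per subnet.
PREFIX=28
TOTAL=$(( 1 << (32 - PREFIX) ))
USABLE=$(( TOTAL - 3 ))
echo "$TOTAL total, $USABLE usable"   # prints: 16 total, 13 usable
```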
l If you plan to back up your DB System to Object Storage, the VCN must have an
enabled internet gateway and a corresponding route rule for it. For more information,
see Backing Up to Oracle Cloud Infrastructure Object Storage and Connectivity to the
Internet.
l Each VCN subnet has a default security list that contains a rule to allow TCP traffic on
destination port 22 (SSH) from source 0.0.0.0/0 and any source port. You can update
the default security list or create new lists to allow other types of access, either before or
after you launch the DB System. For more information, see Security Lists.
l For a 2-node RAC DB System, make sure port 22 is open for both ingress and egress on
the subnet; otherwise, the DB System might fail to provision successfully.
l If you need DNS name resolution for the system, decide whether to use a Custom
Resolver (your choice of DNS server) or the Internet and VCN Resolver (the
DNS capability built in to the VCN). For more information, see DNS in Your Virtual Cloud
Network.
To simplify launching a DB System in the Console and when using the API, the following
default options are used for the initial database and for any additional databases that you
create. (Several advanced options, such as Time Zone, can be set when you use the dbcli
command line interface to create databases.)
For a list of the database options that you can set, see To launch a DB System.
To launch a DB System
1. Open the Console, click Database, choose your Compartment, and then click Launch
DB System.
2. In the Launch DB System dialog, enter the following:
Option Description
DB System Information
Display Name: A friendly display name for the DB System. The name doesn't need to be
unique; an Oracle Cloud Identifier (OCID) uniquely identifies the DB System.
Shape: The shape to use to launch the DB System. The shape determines the type of
DB System and the resources allocated to the system.
l BM.HighIO1.36: Provides a 1-node DB System (one bare
metal server), with up to 36 CPU cores, 512 GB memory, and
four 3.2 TB locally attached NVMe drives (12.8 TB total) to the
DB System.
l BM.DenseIO1.36: Provides a 1-node DB System (one bare
metal server), with up to 36 CPU cores, 512 GB memory, and
nine 3.2 TB locally attached NVMe drives (28.8 TB total) to the
DB System.
l BM.RACLocalStorage1.72: Provides a 2-node
RAC DB System (two bare metal servers), with up to 36 CPU
cores on each node (72 total per cluster), 512 GB memory,
direct attached shared storage with twenty 3.2 TB SSD drives
(64 TB total).
Note that the 64 TB storage for this shape is available only in
the Phoenix (PHX) region and the Frankfurt (FRA) region.
Storage for the Ashburn (IAD) region is twenty 1.2 TB SSD
drives (24 TB total).
l Exadata.Quarter1.84: Provides a 2-node Exadata
DB System with 22 enabled CPU cores, with up to 62
additional CPU cores, 720 GB RAM per node, 288 TB of raw
storage, 84 TB of usable storage, and unlimited I/O. This
shape supports only the Enterprise Edition - Extreme
Performance.
l Exadata.Half1.168: Provides a 4-node Exadata DB System
with 44 enabled CPU cores and up to 124 additional CPU cores.
Cluster Name: A unique cluster name for a multi-node DB System. The name must begin with
a letter and contain only letters (a-z and A-Z), numbers (0-9), and hyphens (-). The cluster
name can be no longer than 11 characters and is not case sensitive.
Total Node Count: The number of nodes in the DB System, which depends on the shape you
select. You can specify 1 or 2 nodes for Virtual Machine DB Systems, except for
VM.Standard1.1, which is a single-node DB System.
Oracle Database Software Edition: The database edition supported by the DB System. You
can mix supported database versions on the DB System, but not editions. (The database
edition cannot be changed and applies to all the databases in this DB System.)
CPU Core Count: The number of CPU cores for the DB System. Displays only if you select a
shape that allows you to configure the number of cores; the text below the field indicates the
acceptable values for that shape. For a multi-node DB System, the core count is evenly
divided across the nodes.
Bare Metal DB Systems only: You can increase the CPU cores to accommodate increased
demand after you launch the DB System.
License Type: The type of license you want to use for the DB System. Your choice affects
metering for billing.
License Included means the cost of the cloud service includes a license for the Database
service.
Bring Your Own License (BYOL) means you are an Oracle Database customer with an
Unlimited License Agreement or Non-Unlimited License Agreement and want to use your
license with Oracle Cloud Infrastructure. This removes the need for separate on-premises
licenses and cloud licenses.
SSH Public Key: The public key portion of the key pair you want to use for SSH access to the
DB System. To provide multiple keys, paste each key on a new line. Make sure each key is on
a single, continuous line. The length of the combined keys cannot exceed 10,000 characters.
Data Storage Percentage: The percentage (40% or 80%) assigned to DATA storage (user
data and database files). The remaining percentage is assigned to RECO storage (database
redo logs, archive logs, and recovery manager backups).
Network Information
Virtual Cloud Network Compartment: The compartment containing the network in which to
launch the DB System.
Backup Subnet: For Exadata DB Systems only. The subnet to use for the backup network,
which is typically used to transport backup information to and from Oracle Cloud
Infrastructure Object Storage, and for Data Guard replication.
Do not use a subnet that overlaps with 192.168.128.0/20. This restriction applies to both the
client subnet and backup subnet.
If you plan to back up databases to Object Storage, see the network prerequisites in Backing
Up an Exadata Database.
Hostname Prefix: Your choice of host name for the DB System. The host name must begin
with an alphabetic character and can contain a maximum of 30 alphanumeric characters,
including hyphens (-).
Note: The host name must be unique within the subnet. If it is not unique, the DB System will
fail to provision.
Host Domain Name: The domain name for the DB System. If the selected subnet uses the
Oracle-provided Internet and VCN Resolver for DNS name resolution, this field displays the
subnet's domain name and can't be changed. Otherwise, you can provide your choice of a
domain name. Hyphens (-) are not permitted.
For Exadata DB Systems, if you plan to store database backups in Object Storage, Oracle
recommends using a VCN Resolver for DNS name resolution for the client subnet because it
automatically resolves the Swift endpoints used for backups.
Host and Domain URL: Combines the host and domain names to display the fully qualified
domain name (FQDN) for the database. The maximum length is 64 characters.
Database Information
Database Name: The name for the database. The database name must begin with an
alphabetic character and can contain a maximum of eight alphanumeric characters. Special
characters are not permitted.
Database Version: The version of the initial database created on the DB System when it is
launched. After the DB System is active, you can create additional databases by using the
command line interface available on the DB System. You can mix database versions on the
DB System, but not editions.
PDB Name: Not applicable to version 11.2.0.4. The name of the pluggable database. The
PDB name must begin with an alphabetic character and can contain a maximum of 30
alphanumeric characters. The only special character permitted is the underscore (_).
Database Admin Password: A strong password for SYS, SYSTEM, the TDE wallet, and PDB
Admin. The password must be 9 to 30 characters and contain at least two uppercase, two
lowercase, two numeric, and two special characters. The special characters must be _, #,
or -.
Automatic Backup: Select the check box to enable automatic incremental backups for this
database.
Database Workload: Select the workload type that best suits your application.
Online Transaction Processing (OLTP) configures the database for a transactional
workload, with a bias toward high volumes of random data access.
Decision Support System (DSS) configures the database for a decision support or data
warehouse workload, with a bias toward large data scanning operations.
Character Set: The character set for the database. The default is AL32UTF8.
National Character Set: The national character set for the database. The default is
AL16UTF16.
l Provisioning: Yellow icon. Resources are being reserved for the DB System, the
system is booting, and the initial database is being created. Provisioning can take
several minutes. The system is not ready to use yet.
l Available: Green icon. The DB System was successfully provisioned. A few
minutes after the system enters this state, you can SSH to it and begin using it.
l Starting: Yellow icon. The DB System is being powered on by the start or reboot
action in the Console or API.
l Stopping: Yellow icon. The DB System is being powered off by the stop or reboot
action in the Console or API.
l Stopped: Yellow icon. The DB System was powered off by the stop action in the
Console or API.
l Terminating: Gray icon. The DB System is being deleted by the terminate action
in the Console or API.
l Terminated: Gray icon. The DB System has been deleted and is no longer
available.
l Failed: Red icon. An error condition prevented the provisioning or continued
operation of the DB System.
You can also check the status of DB Systems and database nodes by using the ListDbSystems
or ListDbNodes API operations, which return the lifecycleState attribute.
1. Open the Console, click Database and then choose your Compartment.
2. In the list of DB Systems, find the DB System you want to stop or start, and then click
its name to display details about it.
3. In the list of nodes, click the Actions icon for a node, and then click one of the
following actions:
l Start: Restarts a stopped node. After the node is restarted, the Stop action is
enabled.
l Stop: Shuts down the node. After the node is powered off, the Start action is
enabled.
l Reboot: Shuts down the node, and then restarts it.
Resource Billing
Note
1. Open the Console, click Database, and then choose your Compartment.
2. In the list of DB Systems, find the system you want to scale and click its highlighted
name.
The system details are displayed.
3. Click Scale Up/Down and then change the number in Total CPU Core Count. The
text below the field indicates the acceptable values, based on the shape used when the
DB System was launched.
4. Click Scale Up/Down DB System.
Note
1. Open the Console, click Database, and then choose your Compartment.
2. In the list of DB Systems, find the system you want to scale up and click its highlighted
name.
The system details are displayed.
3. Click Scale Storage Up, and then select the new storage size from the drop-down list.
4. Click Scale Storage Up.
To terminate a DB System
Terminating a DB System permanently deletes it and any databases running on it.
Note
1. Open the Console, click Database, and then choose your Compartment.
A list of DB Systems is displayed.
2. For the DB System you want to terminate, click the Actions icon and then click
Terminate.
3. Confirm when prompted.
The DB System's icon indicates Terminating.
At this point, you cannot connect to the system and any open connections will be terminated.
1. Open the Console, click Database, and then choose your Compartment.
2. In the list of DB Systems, find the system you want to scale and click its highlighted
name.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
DB Systems:
l GetDbSystem
l LaunchDbSystem
l ListDbSystems
l TerminateDbSystem
Database homes:
l GetDbHome
l ListDbHomes
Databases:
l GetDatabase
l ListDatabases
Shapes and versions:
l ListDbSystemShapes
l ListDbVersions
For the complete list of APIs for the Database service, see Database Service API.
DNS lets you use host names instead of IP addresses to communicate with a DB System. You
can use the Internet and VCN Resolver (the DNS capability built into the VCN) as described in
DNS in Your Virtual Cloud Network.
Alternatively, you can use your choice of DNS server. You associate the host name and
domain name to the public or private IP address of the DB System. You can find the host and
domain names and IP addresses for the DB System in the Oracle Cloud Infrastructure Console
on the Database page.
To associate the host name to the DB System's public or private IP address, contact your
DNS administrator and request a custom DNS record for the DB System’s IP address. For
example, if your domain is example.com and you want to use clouddb1 as the host name, you
would request a DNS record that associates clouddb1.example.com to your DB System's IP
address.
If you provide the public IP address to your DNS administrator as described above, you should
also associate a custom domain name to the DB System's public IP address:
1. Register your domain name through a third-party domain registration vendor, such as
register.com.
2. Resolve your domain name to the DB System's public IP address, using the third-party
domain registration vendor console. For more information, refer to the third-party
domain registration documentation.
Connecting to a DB System
This topic explains how to connect to an active DB System using SSH or SQL Developer. How
you connect depends on how your cloud network is set up. You can find information on various
Prerequisites
l The full path to the file that contains the private key associated with the public key used
when the DB System was launched.
l The public or private IP address of the DB System. Use the private IP address to
connect to the DB System from your on-premises VPN, or from within the virtual cloud
network (VCN). This includes connecting from an on-premises host through a VPN to your
VCN, or from another host in the same VCN. Use the DB System's
public IP address to connect to the system from outside the cloud (with no VPN). You
can find the IP addresses in the Oracle Cloud Infrastructure Console on the Database
page.
You can connect to a DB System by using a Secure Shell (SSH) connection. Most UNIX-style
systems (including Linux, Solaris, BSD, and OS X) include an SSH client by default. For
Windows, you can download a free SSH client called PuTTY from http://www.putty.org.
When connecting to a multi-node DB System, you'll SSH to each individual node in the cluster.
<private key> is the full path and name of the file that contains the private key associated
with the DB System you want to access.
Use the DB System's private or public IP address depending on your network configuration.
For more information, see Prerequisites.
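For a multi-node system, the connection is repeated per node. As a sketch (the key path and node IP addresses below are placeholders; opc is the login used elsewhere in this topic), a small loop can print the command to run for each node:

```shell
# Print the ssh command for each node of a multi-node DB System.
# The key path and node IPs are placeholder assumptions.
PRIVATE_KEY="$HOME/.ssh/dbsystem_key"
NODE_IPS="203.0.113.10 203.0.113.11"

for ip in $NODE_IPS; do
  echo "ssh -i $PRIVATE_KEY opc@$ip"
done
```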
2. Set the environment to the ASM instance. The ASM instance ID varies depending on
which node you connect to, for example, +ASM1, +ASM2, and so on.
[root@ed1db01 ]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
4. Connect as the oracle user and get the details about one of the databases by using the
srvctl command.
[root@ed1db01 ~]# su - oracle
[oracle@ed1db01 ~]$ . oraenv
ORACLE_SID = [oracle] ? cdbm01
The Oracle base has been set to /u02/app/oracle
[oracle@ed1db01 ~]$ srvctl config database -d cdbm01
Database unique name: cdbm01 <<== DB unique name
Database name:
Oracle home: /u02/app/oracle/product/12.1.0/dbhome_2
Oracle user: oracle
Spfile: +DATAC1/cdbm01/spfilecdbm01.ora
Password file: +DATAC1/cdbm01/PASSWORD/passwd
Domain: data.customer1.oraclevcn.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: racoper
Database instances: cdbm011,cdbm012 <<== SID
Configured nodes: ed1db01,ed1db02
Database is administrator managed
5. (Skip this step for Exadata DB Systems.) Set the ORACLE_SID and ORACLE_UNIQUE_
Connected to:
Oracle Database 12c EE Extreme Perf Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label
Security,
OLAP, Advanced Analytics and Real Application Testing options
You can connect to a database with SQL Developer by using one of the following methods:
l Create a temporary SSH tunnel from your computer to the database. This method
provides access only for the duration of the tunnel. (When you are done using the
database, be sure to close the SSH tunnel by exiting the SSH session.)
l Open port 1521 for the Oracle default listener by updating the security list used for the
DB System. This method provides more durable access to the database. For more
information, see Updating the Security List for the DB System.
After you've created an SSH tunnel or opened port 1521 as described above, start your SQL
Developer client and create a connection using the following connection details:
l Port: 9999 (or the port of your choice) if using an SSH tunnel, or 1521 if not using a
tunnel.
l Service name: The concatenated Database Unique Name and Host Domain
Name, for example, db1_phx1tv.mycompany.com. You can find both these names in
the Console by clicking Database and then clicking the DB System name for details.
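For the tunnel method, the forwarding command has this general shape. This sketch only prints the command; the key path and IP address are placeholders:

```shell
# Forward local port 9999 to the DB System's listener port 1521.
# Key path and IP address are placeholder assumptions.
PRIVATE_KEY="$HOME/.ssh/dbsystem_key"
DB_SYSTEM_IP=203.0.113.10
LOCAL_PORT=9999

# -N: run no remote command; the session exists only to carry the tunnel.
echo "ssh -i $PRIVATE_KEY -N -L $LOCAL_PORT:localhost:1521 opc@$DB_SYSTEM_IP"
```

While the printed command is running, point SQL Developer at localhost:9999; exiting the SSH session closes the tunnel.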
After you've created an SSH tunnel or opened port 1521 as described above, you can connect
to a multi-node DB System using SCAN IP addresses or public IP addresses, depending on
how your network is set up and where you are connecting from. You can find the IP addresses
in the Console, in the Database details page.
l Use the private SCAN IP addresses, as shown in the following tnsnames.ora example:
testdb=
(DESCRIPTION =
(ADDRESS_LIST=
(ADDRESS = (PROTOCOL = TCP)(HOST = <scanIP1>)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = <scanIP2>)(PORT = 1521)))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <dbservice.subnetname.dbvcn.oraclevcn.com>)
)
)
l Define an external SCAN name in your on-premises DNS server. Your application can
resolve this external SCAN name to the DB System's private SCAN IP addresses, and
then the application can use a connection string that includes the external SCAN name.
In the following tnsnames.ora example, extscanname.example.com is defined in the
on-premises DNS server.
testdb =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = <extscanname.example.com>)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <dbservice.subnetname.dbvcn.oraclevcn.com>)
)
)
l When the client uses the public IP address, the client bypasses the SCAN listener and
reaches the node listener, so server side load balancing is not available.
l When the client uses the public IP address, it cannot take advantage of the VIP failover
feature. If a node becomes unavailable, new connection attempts to the node will hang
until a TCP/IP timeout occurs. You can set client side sqlnet parameters to limit the
TCP/IP timeout.
The following tnsnames.ora example shows a connection string that includes the CONNECT_
TIMEOUT parameter to avoid TCP/IP timeouts.
test=
(DESCRIPTION =
(CONNECT_TIMEOUT=60)
(ADDRESS_LIST=
(ADDRESS = (PROTOCOL = TCP)(HOST = <publicIP1>)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = <publicIP2>)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <dbservice.subnetname.dbvcn.oraclevcn.com>)
)
)
For a 1-node DB System or 2-node RAC DB System, regardless of how you connect to the DB
System, before you use OS authentication to connect to a database (for example, sqlplus /
as sysdba) be sure to set the ORACLE_UNQNAME variable. Otherwise, commands that
require the TDE wallet will result in the error ORA-28365: wallet is not open.
Note that this is not an issue when using a TNS connection because ORACLE_UNQNAME is
automatically set in the database CRS resource.
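Using the names from the srvctl example above (cdbm01 as the database unique name, cdbm011 as the instance SID on node 1), the variables would be set like this before an OS-authenticated connection:

```shell
# Set the instance SID and database unique name so that commands needing
# the TDE wallet do not fail with ORA-28365 (names from the srvctl example).
export ORACLE_SID=cdbm011
export ORACLE_UNQNAME=cdbm01

# Then connect with OS authentication, e.g.: sqlplus / as sysdba
echo "$ORACLE_SID $ORACLE_UNQNAME"
```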
If the DB System’s root volume becomes full, you might lose the ability to SSH to the system
(the SSH command will fail with permission denied errors). Before you copy a large amount
of data to the root volume, for example, to migrate a database, use the dbcli
create-dbstorage command to set up storage on the system’s NVMe drives and then copy the
database files to that storage. For more information, see Setting Up Storage on the
DB System.
WHAT NEXT?
Before you begin updating your DB System, review the information in Updating a DB System.
For information about setting up an Enterprise Manager console to monitor your databases,
see Monitoring a Database.
Updating a DB System
Review the following information before you begin updating a DB System.
Do not add interactive commands such as oraenv, or commands that might return an error or
warning message, to the .bash_profile file for the grid or oracle users. Adding such
For a 1-node DB System or 2-node RAC DB System, do not remove or modify the following
firewall rules in /etc/sysconfig/iptables:
l The firewall rules for ports 1521, 7070, and 7060 allow the Database service to manage
the DB System. Removing or modifying them can result in the Database Service no
longer operating properly.
l The firewall rules for 169.254.0.2:3260 and 169.254.0.3:80 prevent non-root users from
escalating privileges and tampering with the system’s boot volume and boot process.
Removing or modifying these rules can allow non-root users to modify the system's
boot volume.
OS Updates
For customers with Oracle Linux Premier Support, Oracle recommends that you register with
the Unbreakable Linux Network (ULN) and use the ULN Yum repositories and Oracle Ksplice to
apply updates, including OS security updates. For more information, see Ksplice and About
Ksplice Uptrack.
For customers without Oracle Linux Premier Support, DB Systems include the Oracle Yum
repository files, but all channels are disabled. You can enable specific channels, for example,
public_ol6_latest, and then use the Oracle Yum security plug-in to apply OS security updates.
For more information, see Downloading the Oracle Public Yum Repository Files and Installing
and Using the Yum Security Plugin.
Important!
For information about applying Oracle database patches to a 1-node DB System, see Patching
a DB System.
OS Kernel Updates
A DB System boots from a network drive, which requires additional actions when you update
the OS kernel. For more information, see OS Kernel Updates.
Configuring a DB System
This topic provides information to help you configure your DB System.
Oracle recommends that you run a Network Time Protocol (NTP) daemon on your 1-node
DB Systems to keep system clocks stable during rebooting. If you need information about an
NTP daemon, see Setting Up “NTP (Network Time Protocol) Server” in RHEL/CentOS 7.
Oracle recommends that you configure NTP on both nodes in a 2-node RAC DB System to
synchronize time across the nodes. If you do not configure NTP, then Oracle Clusterware
configures and uses the Cluster Time Synchronization Service (CTSS), and the cluster time
might be out-of-sync with applications that use NTP for time synchronization.
For information about configuring NTP on a version 12c database, see Setting Network Time
Protocol for Cluster Time Synchronization. For a version 11g database, see Network Time
Protocol Setting.
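If you run the classic ntpd rather than relying on CTSS, a minimal /etc/ntp.conf along these lines is typical; the server hosts here are placeholders for your own time sources:

```
# /etc/ntp.conf (sketch; replace the server lines with your NTP sources)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/drift
```

On a 2-node RAC DB System, apply the same configuration on both nodes so they agree on a common set of time sources.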
l For version 12c databases, if you don’t want your tablespaces encrypted, you can set
the ENCRYPT_NEW_TABLESPACES database initialization parameter to DDL.
l On a 1- or 2-node RAC DB System, you can use the dbcli update-tdekey command to
update the master encryption key for a database.
l You must create and activate a master encryption key for any PDBs that you create.
After creating or plugging in a new PDB on a 1- or 2-node RAC DB System, use the
dbcli update-tdekey command to create and activate a master encryption key for the
PDB. Otherwise, you might encounter the error ORA-28374: typed master key not
found in wallet when attempting to create tablespaces in the PDB. In a multitenant
environment, each PDB has its own master encryption key which is stored in a single
keystore used by all containers. For more information, see "Overview of Managing a
Multitenant Environment" in the Oracle Database Administrator’s Guide.
l For information about encryption on Exadata DB Systems, see Using Tablespace
Encryption in Exadata Cloud Service.
For detailed information about database encryption, see the Oracle Database Security White
Papers.
Patching a DB System
Note
This topic explains how to perform patching operations on DB Systems and database homes
by using the Console, API, or the database CLI (DBCLI).
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
For administrators: The policy in Let Database Admins Manage Database Systems lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for databases, see Details for the Database Service.
l Ensure that the DB System's cloud network (VCN) is configured with an internet
gateway. Note that the network traffic between the DB System and Object Storage does
not leave the cloud and never reaches the public internet. For more information, see
Connectivity to the Internet.
l Because patching a system requires a reboot, plan to run the operations at a time when
they will have minimal impact on users. To avoid system interruption, consider
implementing a high availability strategy such as Oracle Data Guard. For more
information, see Using Oracle Data Guard with the Database CLI.
l Oracle recommends that you back up your database and test the patch on a test system
before you apply the patch. For information about backing up the databases, see
Backing Up a Database.
l If a patch operation fails, you can view details about the operation by accessing the logs
on a host. For help with troubleshooting a failed operation, contact Oracle Support. See
Contacting Support.
l You must patch a DB System before you patch the databases within that system.
You can use the Console to view the history of patch operations on a DB System or an
individual database, apply patches, and monitor the status of an operation.
You can also use the pre-check action to ensure that your DB System or database home
meets the requirements for the patch you want to apply.
2. Find the DB System on which you want to perform a patch operation, and click its name
to display details about it.
3. Under Resources, click Patches.
4. Review the list of patches.
5. Click the Actions icon for the patch you are interested in, and then click one of the
following actions:
l Pre-check: Checks for any prerequisites to make sure that the patch can be
applied successfully.
l Apply: Performs the pre-check, and then applies the patch.
6. Confirm when prompted.
7. In the list of patches, click the patch name to display its patch request and monitor the
progress of the patch operation.
While a patch is being applied, the patch's status displays as Applying and the DB
System's status displays as Updating. If the operation completes successfully, the
patch's status changes to Applied and the DB System's status changes to Available.
6. Click the Actions icon for the patch you are interested in, and then click one of the
following actions:
l Pre-check: Checks for any prerequisites to make sure that the patch can be
applied successfully.
l Apply: Performs the pre-check, and then applies the patch.
7. Confirm when prompted.
8. In the list of patches, click the patch name to display its patch request and monitor the
progress of the patch operation.
While a patch is being applied, the patch's status displays as Applying and the
database's status displays as Updating. If the operation completes successfully, the
patch's status changes to Applied and the database's status changes to Available.
Each patch history entry represents an attempted patch operation and indicates whether the
operation was successful or failed. You can retry a failed patch operation. Repeating an
operation results in a new patch history entry.
Patch history views in the Console do not show patches that were applied by using command
line tools such as DBCLI or the OPatch utility.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
DB Systems:
l ListDbSystemPatches
l ListDbSystemPatchHistoryEntries
l GetDbSystemPatch
l GetDbSystemPatchHistoryEntry
l UpdateDbSystem
Databases:
l ListDbHomePatches
l ListDbHomePatchHistoryEntries
l GetDbHomePatch
l GetDbHomePatchHistoryEntry
l UpdateDbHome
For the complete list of APIs for the Database service, see Database Service API.
This topic explains how to use the command line interface on the DB System to patch a DB
System. Patches are available from the Oracle Cloud Infrastructure Object Storage service.
You'll use the dbcli commands to download and apply patches to some or all of the
components in your system.
PREREQUISITES
For connecting to the DB System via SSH, you'll need the path to the private key associated
with the public key used when the DB System was launched.
You also need the public or private IP address of the DB System. Use the private IP address to
connect to the DB System from your on-premises VPN, or from within the virtual cloud
network (VCN). This includes connecting from an on-premises host through a VPN to your
VCN, or from another host in the same VCN. Use the DB System's public IP
address to connect to the system from outside the cloud (with no VPN). You can find the IP
addresses in the Oracle Cloud Infrastructure Console on the Database page.
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
login as: opc
4. Wait for the update job to complete successfully. Check the status of the job by using
the dbcli list-jobs command.
[root@dbsys ~]# dbcli list-jobs
ID                                       Description      Created                            Status
---------------------------------------- ---------------- ---------------------------------- ----------
dc9ce73d-ed71-4473-99cd-9663b9d79bfd     dbcli patching   January 18, 2017 10:19:34 AM PST   Success
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
login as: opc
3. Display the installed patch versions by using the dbcli describe-component command. If
the Available Version column indicates a version number for a component, you should
update the component.
[root@dbsys ~]# dbcli describe-component
System Version
---------------
12.1.2.10.0
4. Display the latest patch versions available in Object Storage by using the dbcli
describe-latestpatch command.
[root@dbsys ~]# dbcli describe-latestpatch
componentType availableVersion
--------------- --------------------
gi 12.1.0.2.161018
db 11.2.0.4.161018
db 12.1.0.2.161018
oak 12.1.2.10.0
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
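The command that starts the server patching job shown in the output that follows is elided from the steps above; assuming the standard dbcli syntax, it can be sketched as:

```shell
# Start patching of the server components; the command returns a job ID
dbcli update-server

# Track the job with the ID from the command output (placeholder ID)
dbcli describe-job --jobid <job_id>
```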
Job details
----------------------------------------------------------------
ID: 9a02d111-e902-4e94-bc6b-9b820ddf6ed8
Description: Server Patching
Status: Running
Created: January 19, 2017 9:37:11 AM PST
Message:
5. Verify that the server components were updated successfully by using the dbcli
describe-component command. The Available Version column should indicate
up-to-date.
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
login as: opc
3. Get the ID of the database home by using the dbcli list-dbhomes command.
[root@dbsys ~]# dbcli list-dbhomes
ID                                   Name              DB Version  Home Location
------------------------------------ ----------------- ----------  ------------------------------------------
b727bf80-c99e-4846-ac1f-28a81a725df6 OraDB12102_home1  12.1.0.2    /u01/app/orauser/product/12.1.0.2/dbhome_1
4. Update the database home components by using the dbcli update-dbhome command.
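The update command itself does not appear above; assuming the standard dbcli syntax and the home ID returned by dbcli list-dbhomes in step 3, it might look like this:

```shell
# Patch the database home identified in step 3; the command returns a job ID
dbcli update-dbhome --dbhomeid b727bf80-c99e-4846-ac1f-28a81a725df6
```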
Job details
----------------------------------------------------------------
ID: 31b38f67-f993-4f2e-b7eb-5bccda9901ae
Description: DB Home Patching: Home Id is b727bf80-c99e-4846-ac1f-28a81a725df6
Status: Success
Created: January 20, 2017 10:08:48 AM PST
Message:
Patch conflict check January 20, 2017 10:08:58 AM PST January 20, 2017
10:12:00 AM PST Success
db upgrade January 20, 2017 10:12:00 AM PST January 20, 2017
10:22:17 AM PST Success
6. Verify that the database home components were updated successfully by using the dbcli
describe-component command. The Available Version column should indicate
up-to-date.
Note
If you are required to apply an interim patch (previously known as a "one-off" patch) to fix a
specific defect, follow the procedure in this section. You use the Opatch utility to apply an
interim patch to a database home.
5. Set the Oracle home environment variable to point to the target Oracle home.
sudo su - oracle
export ORACLE_HOME=/u02/app/oracle/product/12.1.0.2/dbhome_1
6. Change to the directory where you placed the patch, and unzip the patch.
cd <work_dir_where_patch_is_stored>
unzip p26543344_122010_Linux-x86-64.zip
7. Change to the directory with the unzipped patch, and check for conflicts.
cd 26543344
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
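If the conflict check passes, the apply and verification steps (not shown here) typically follow this pattern; run them from the unzipped patch directory as the oracle user:

```shell
# Apply the interim patch to the Oracle home set in ORACLE_HOME
$ORACLE_HOME/OPatch/opatch apply

# Confirm the patch is now recorded in the inventory
$ORACLE_HOME/OPatch/opatch lsinventory
```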
11. If the readme indicates that the patch has a sqlpatch component, run the datapatch
command against each database.
Before you run datapatch, ensure that all pluggable databases (PDBs) are open. To open
a PDB, you can use SQL*Plus to execute "ALTER PLUGGABLE DATABASE <pdb_name>
OPEN READ WRITE;" against the PDB.
$ORACLE_HOME/OPatch/datapatch
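Putting the PDB check and the datapatch run together, a sketch might look like the following; the -verbose flag is optional:

```shell
# Open all pluggable databases so datapatch can reach them
# (PDBs that are already open may raise a harmless error)
sqlplus / as sysdba <<'EOF'
ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE;
EXIT
EOF

# Apply the SQL portion of the patch to each database
$ORACLE_HOME/OPatch/datapatch -verbose
```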
Creating a Database
Note
When you launch a Bare Metal DB System, an initial database is created in that system. You
can create additional databases in that DB System at any time by using the Console or the
API. You can create an empty database or reproduce a database by using a standalone
backup. A standalone backup is a full backup from a database that's been terminated. See
Recovering a Database from Object Storage.
The database edition will be the edition supported by the DB System in which the database is
created, and each new database is created in a separate database home.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
For administrators: The policy in Let Database Admins Manage Database Systems lets the
specified group do everything with databases and related Database resources.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for databases, see Details for the Database Service.
Note
The database that you create will be the same edition as the
initial database in the DB System.
1. Open the Console, click Database, and then choose your Compartment.
2. In the list of DB Systems, find the DB System in which you want to create the database,
and then click its name to display details about it.
3. Click Create Database.
4. In the Create Database dialog, enter the following:
l Database Name: The name for the database. The database name must begin
with an alphabetic character and can contain a maximum of eight alphanumeric
characters. Special characters are not permitted.
l Database Version: The version of the database. You can mix database versions
on the DB System, but not editions.
l Admin Password: A strong password for SYS, SYSTEM, TDE wallet, and PDB
Admin. The password must be nine to thirty characters and contain at least two
uppercase, two lowercase, two numeric, and two special characters. The special
characters must be _, #, or -.
l Confirm Admin Password: Re-enter the database admin password.
l Automatic Backup: Check the check box to enable automatic incremental
backups for this database.
l Database Workload: Select the workload type that best suits your application.
o Online Transactional Processing (OLTP) configures the database for a
transactional workload, with a bias towards high volumes of random data
access.
o Decision Support System (DSS) configures the database for a decision
support or data warehouse workload, with a bias towards large data
scanning operations.
5. (Optional) Click Show Advanced Options, and specify the character set and/or
national character set for this database. The defaults are AL32UTF8 and AL16UTF16,
respectively.
Click Create Database.
When the database creation is complete, the status changes from Provisioning to Available.
To terminate a database
You'll get the chance to back up the database prior to terminating it. This creates a standalone
backup that can be used to create a database later. Oracle recommends that you create this
final backup for any production (non-test) database.
Note
You cannot terminate a database that is assuming the primary role in a Data Guard
association. To terminate it, you can switch it over to the standby role.
1. Open the Console, click Database and then choose your Compartment.
2. In the list of DB Systems, find the DB System that contains the database you want to
terminate, and then click its name to display details about it.
3. In the list of databases, find the database, click the Actions icon, and then click
Terminate.
4. In the confirmation dialog, indicate whether you want to back up the database before
terminating it, and type the name of the database to confirm the termination.
5. Click Terminate Database.
The database's status indicates Terminating.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
l GetDbHome
l CreateDbHome
l DeleteDbHome
For the complete list of APIs for the Database service, see Database Service API.
Monitoring a Database
This topic explains how to set up an Enterprise Manager Database Express (EM Express)
console or an Enterprise Manager Database Control console to monitor a database.
Each console is a web-based database management tool inside the Oracle database. You can
use the console to perform basic administrative tasks such as managing user security,
memory, and storage, and view performance information.
Some of the procedures below require permission to create or update security lists. For more
information about security list policies, see Security Lists.
You must also update the security list and iptables for the DB System as described later in this
topic.
When you enable the console, you'll set the port for the console. The procedure below uses
port 5500, but each additional console enabled on the same DB System will have a different
port.
For example:
SQL> exec DBMS_XDB_CONFIG.SETHTTPSPORT(5500);
l To determine the port for a previously enabled console, use the following
command.
select dbms_xdb_config.getHttpsPort() from dual;
For example:
SQL> select dbms_xdb_config.getHttpsPort() from dual;
DBMS_XDB_CONFIG.GETHTTPSPORT()
------------------------------
5500
3. Return to the operating system by typing exit and then confirm that the listener is
listening on the port:
lsnrctl status | grep HTTP
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=xxx.us.oracle.com)(PORT=5500))(Security=(my_wallet_
directory=/u01/app/oracle/admin/prod/xdb_wallet))(Presentation=HTTP)(Session=RAW))
1. SSH to one of the nodes in the DB System, log in as opc, and then sudo to the grid user.
[opc@dbsysHost1 ~]$ sudo su - grid
[grid@dbsysHost1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base has been set to /u01/app/grid
2. Get the location of the wallet directory, shown in the command output below.
[grid@dbsysHost1 ~]$ lsnrctl status | grep xdb_wallet
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=dbsysHost1.sub04061528182.dbsysapril6.oraclevcn.com)
(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet))
(Presentation=HTTP)(Session=RAW))
3. Return to the opc user, switch to the oracle user, and change to the wallet directory.
[opc@dbsysHost1 ~]$ sudo su - oracle
[oracle@dbsysHost1 ~]$ cd /u01/app/oracle/admin/dbsys12_phx3wm/xdb_wallet
7. Important! Repeat the steps above on the other node in the cluster.
1. From a web browser, connect to the console using the following URL format:
https://<ip_address>:<port>/em
Use the private IP address to connect to the DB System from your on-premises VPN, or
from within the virtual cloud network (VCN). This includes connecting from an
on-premises host through a VPN to your VCN, or from another host in the same VCN.
Use the DB System's public IP address to connect to the system from outside the cloud
(with no VPN).
You can find the IP addresses in the Oracle Cloud Infrastructure Console on the
Database page.
2. A login page is displayed and you can log in with any valid database credentials.
To learn more about EM Express, see Introduction to Oracle Enterprise Manager Database
Express.
Note
By default, the Enterprise Manager Database Control console is not enabled on version
11.2.0.4 databases. You can enable the console:
l when you create a database by using the dbcli create-database with the -co parameter
l for an existing database as described here.
Port 1158 is the default port used for the first console enabled on the DB System, but each
additional console enabled on the DB System will have a different port.
To determine the port for the Enterprise Manager Database Control console
1. SSH to the DB System, log in as opc, and sudo to the oracle user.
sudo su - oracle
. oraenv
<provide the database SID at the prompt>
1. From a web browser, connect to the console using the following URL format:
https://<ip_address>:<port>/em
To learn more about Enterprise Manager Database Control, see Introduction to Oracle
Enterprise Manager Database Control.
You'll create SSH keys on each node and copy the key to the other node, so that each node
has the keys for both nodes. The following procedure uses the sample names node1 and
node2.
2. Create a directory called .ssh, set its permissions, create an RSA key, and add the
public key to the authorized_keys file.
mkdir .ssh
chmod 755 .ssh
ssh-keygen -t rsa
cat id_rsa.pub > authorized_keys
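The copy step, distributing each node's public key to the other node, is not shown above; using the sample names node1 and node2, it can be sketched as follows (this assumes you can already authenticate to node2 in some way):

```shell
# Run from node1: append node1's public key to node2's authorized_keys
cat ~/.ssh/id_rsa.pub | ssh node2 'cat >> ~/.ssh/authorized_keys'

# Verify that passwordless SSH to node2 now works
ssh node2 hostname
```

Repeat in the other direction from node2 so each node holds both keys.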
SYSMAN_PWD=<admin password>
CLUSTER_NAME=<cluster name> <=== to get the cluster name, run: $GI_HOME/bin/cemutlo -n
ASM_OH=$GI_HOME
ASM_SID=+ASM1
ASM_PORT=<asm listener port>
ASM_USER_NAME=ASMSNMP
ASM_USER_PWD=<admin password>
2. On node1, run Enterprise Manager Configuration Assistant (EMCA) using the emca.rsp
file as input.
$ORACLE_HOME/bin/emca -config dbcontrol db -repos create -cluster -silent -respFile <location of
response file above>
3. On node2, configure the console so the agent in node1 reports to the console in node1,
and the agent in node2 reports to the console in node2.
$ORACLE_HOME/bin/emca -reconfig dbcontrol -silent -cluster -EM_NODE <node2 host> -EM_NODE_LIST
<node2 host> -DB_UNIQUE_NAME <db_unique_name>
-SERVICE_NAME <db_unique_name>.<db_domain>
1. On each node, edit iptables to open the console's port as described in Opening Ports on
the DB System.
2. Update the security list for the console's port as described in Updating the Security List
for the DB System.
For important information about critical firewall rules, see Essential Firewall Rules.
(If necessary, you can restore the original file by using the command
iptables-restore < /tmp/iptables.orig.)
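The backup referenced here assumes the original rules were saved first with a step like the following:

```shell
# Save the current iptables rules before making changes
iptables-save > /tmp/iptables.orig
```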
4. Dynamically add a rule to iptables to allow inbound traffic on the console port, as shown
in the following sample. Change the port number and comment as needed.
[root@dbsys ~]# iptables -I INPUT 8 -p tcp -m state --state NEW -m tcp --dport 5500 -j ACCEPT -m
comment --comment "Required for EM Express."
The change takes effect immediately and will remain in effect when the node is
rebooted.
7. Update the DB System's security list as described in Updating the Security List for the
DB System.
Review the list of ports in Opening Ports on the DB System and for every port you open in
iptables, update the security list used for the DB System, or create a new security list.
Note that port 1521 for the Oracle default listener is included in iptables, but should also be
added to the security list.
For detailed information about creating or updating a security list, see Security Lists.
Backing Up a Database
Backing up your DB System is a key aspect of any Oracle database environment. You can
store backups in the cloud or in local storage. Each backup destination has advantages,
disadvantages, and requirements that you should consider, as described below.
Local Storage
l Backups are stored locally in the Fast Recovery Area of the DB System.
l Durability: Low
l Availability: Medium
l Backup and Recovery Rate: High
l Advantages: Optimized backup and fast point-in-time recovery.
l An internet gateway is not required.
l Disadvantages: If the DB System becomes unavailable, the backup is also unavailable.
Currently, Oracle Cloud Infrastructure does not provide the ability to attach block storage
volumes to a DB System, so you cannot back up to network attached volumes.
Note
This topic explains how to work with backups managed by Oracle Cloud Infrastructure. You do
this by using the Console or the API. (For unmanaged backups, you can use RMAN or dbcli,
and you must create and manage your own Object Storage buckets for backups. See Backing
Up to Object Storage Using RMAN.)
Warning
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
If you're new to policies, see Getting Started with Policies and Common Policies.
PREREQUISITES
The DB System's cloud network (VCN) must be configured with an internet gateway.
Oracle recommends that you update the backup subnet's security list to disallow any access
from outside the subnet and allow egress traffic for TCP port 443 (https) on CIDR Ranges
129.146.0.0/16 (Phoenix region), 129.213.0.0/16 (Ashburn region), and 130.61.0.0/16
(Frankfurt region). For more information, see Security Lists.
Note that the network traffic between the DB System and Object Storage does not leave the
cloud and never reaches the public internet. For more information, see Connectivity to the
Internet.
Your DB System must have connectivity to the applicable Swift endpoint for Object Storage.
See https://cloud.oracle.com/infrastructure/storage/object-storage/faq for information about
the Swift endpoints to use.
You can use the Console to enable automatic incremental backups, create full backups on
demand, and view the list of managed backups for a database. The Console also allows you to
delete full backups.
Notes
The list of backups you see in the Console does not include
any unmanaged backups (backups created directly by using
RMAN or dbcli).
All backups are encrypted with the same master key used for
Transparent Data Encryption (TDE) wallet encryption.
The database and DB System must be in an “Available” state for a backup operation to run
successfully. Oracle recommends that you avoid performing actions that could interfere with
availability (such as patching and Data Guard operations) while a backup operation is in
progress. If an automatic backup operation fails, the Database service retries the operation
during the next day’s backup window. If an on-demand full backup fails, you can try the
operation again when the DB System and database availability is restored.
When you enable the Automatic Backup feature, the service creates daily incremental
backups of the database to Object Storage. The first backup created is a level 0 backup. Then,
level 1 backups are created every day until the next weekend. Every weekend, the cycle
repeats, starting with a new level 0 backup. The automatic backup process can run at any
time within the daily backup window (between midnight and 6:00 am UTC). Automatic
incremental backups are retained in Object Storage for 30 days, after which they are
automatically deleted.
You can create a full backup of your database at any time unless your database is assuming
the standby role in a Data Guard association.
Standalone Backups
When you terminate a DB System or a database, all of its resources are deleted, along with
any automatic backups. Full backups remain in Object Storage as standalone backups. You
can use a standalone backup to create a new database.
1. Open the Console, click Database, and then choose your Compartment.
A list of DB Systems is displayed.
2. Find the DB System where the database is located, and click the system name to display
details about it.
A list of databases is displayed.
3. Find the database for which you want to enable or disable automatic backups, and click
its name to display details about it. The details indicate whether automatic backups are
enabled.
4. Under Resources, click Backups.
1. Open the Console, click Database, and then choose your Compartment.
A list of DB Systems is displayed.
2. Find the DB System where the database is located, and click the system name to display
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
l ListBackups
l GetBackup
l CreateBackup
l DeleteBackup
l UpdateDatabase - To enable and disable automatic backups.
For the complete list of APIs for the Database service, see Database Service API.
WHAT'S NEXT?
Note
This topic explains how to back up to the local Fast Recovery Area on a Bare Metal DB System
by using the dbcli command line interface that's available on 1- and 2-node DB Systems.
Some sample dbcli commands are provided below. For complete command syntax, see the
Oracle Database CLI Reference.
Note
You'll use the dbcli commands to create a backup configuration, associate the backup
configuration with the database, initiate the backup operation, and then review the backup
job.
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
The example above uses full parameter names for demonstration purposes, but you can
abbreviate the parameters like this:
dbcli create-backupconfig -n prodbackup -d disk -w 5
4. Get the ID of the database you want to back up by using the dbcli list-databases
command.
5. Get the ID of the backup configuration by using the dbcli list-backupconfigs command.
[root@dbbackup backup]# /opt/oracle/dcs/bin/dbcli list-backupconfigs
ID                                       Name                 DiskRecoveryWindow  BackupDestination  createTime
---------------------------------------- -------------------- ------------------- ------------------ ----------------------
6. Associate the backup configuration ID with the database ID by using the dbcli
update-database command.
[root@dbsys ~]# dbcli update-database --backupconfigid 78a2a5f0-72b1-448f-bd86-cf41b30b64ee --
dbid 71ec8335-113a-46e3-b81f-235f4d1b6fde
{
"jobId" : "2b104028-a0a4-4855-b32a-b97a37f5f9c5",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467775842977,
"description" : "update database id:71ec8335-113a-46e3-b81f-235f4d1b6fde",
"updatedTime" : 1467775842978
}
You can view details about the update job by using the dbcli describe-job command and
specifying the job ID from the dbcli update-database command output, for example:
dbcli describe-job --jobid 2b104028-a0a4-4855-b32a-b97a37f5f9c5
7. Initiate the database backup by using the dbcli create-backup command. The backup
operation is performed immediately.
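The backup command itself is not shown above; assuming the standard dbcli syntax and the database ID obtained in step 4, it might look like this:

```shell
# Back up the database identified by the ID from dbcli list-databases
dbcli create-backup --dbid 71ec8335-113a-46e3-b81f-235f4d1b6fde
```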
You can view details about the back up job by using the dbcli describe-job command and
specifying the job ID from the dbcli create-backup command output, for example:
dbcli describe-job --jobid d6c9edaa-fc80-40a9-bcdd-056430cdc56c
{
"jobId" : "65ce79fe-4ef4-4d7d-8020-e56a5390026d",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "July 6, 2016 23:06:11 PM UTC",
"description" : "Creating a report for database 71ec8335-113a-46e3-b81f-235f4d1b6fde",
"updatedTime" : "July 6, 2016 23:06:11 PM UTC"
After the backup command completes, the database backup files are available in the Fast
Recovery Area on the DB System.
WHAT'S NEXT?
Recovering a Database
For information on restoring a database on a 1- or 2-node RAC DB System, see the following
topics:
l Recovering a Database from the Oracle Cloud Infrastructure Classic Object Store
Note
This topic explains how to recover a database from a backup stored in Object Storage. The
service is a secure, scalable, on-demand storage solution in Oracle Cloud Infrastructure. For
information on using Object Storage as a backup destination, see Backing Up to Oracle Cloud
Infrastructure Object Storage.
You can recover a database by using the Console, the API, or RMAN.
PREREQUISITES
The DB System's cloud network (VCN) must be configured with an internet gateway. Note that
the network traffic between the DB System and Object Storage does not leave the cloud and
never reaches the public internet. For more information, see Connectivity to the Internet.
You can use the Console to restore the database from a backup in the Object Storage that was
created by using the Console or the API. You can restore to the last known good state of the
database, or you can specify a point in time or an existing System Change Number (SCN).
Notes
The list of backups you see in the Console does not include
any unmanaged backups (backups created directly by using
RMAN or dbcli).
For Bare Metal DB Systems, you can also create a new database by using a standalone
backup.
To restore a database
1. Open the Console, click Database, and then choose your Compartment.
A list of DB Systems is displayed.
2. Find the DB System where the database is located, and click the system name to display
details about it.
A list of databases is displayed.
3. Find the database you want to restore, and click its name to display details about it.
4. Click Restore.
A list of backups is displayed.
5. Click the Actions icon for the backup you are interested in, and then click
Restore.
6. Select one of the following options, and click Restore Database:
l Restore to the latest: Restores the database to the last known good state with
the least possible data loss.
l Restore to the timestamp: Restores the database to the timestamp specified.
l Restore to SCN: Restores the database using the SCN specified. This SCN must
be valid.
TIP
If the restore operation fails, review the RMAN logs on the host and fix any issues
before reattempting to restore the database.
Note
l When you create a database from a standalone backup, you can choose a different DB
System and compartment. However, the availability domain must be the same as
where the backup is hosted.
l The target DB System must already exist. The process does not automatically create
one for you.
l The database you create must be the same database type as the database from which
the backup was taken. For example, if you are using a backup of a 1-node database,
then the DB System you select as your target must also be a 1-node DB System.
l The version of the target DB System must be the same or higher than the version of the
standalone backup.
1. Open the Console, click Database, and then click Standalone Backups.
2. In the list of standalone backups, find the backup you want to use to create the
database.
3. Click the Actions icon for the backup you are interested in, and then click
Create Database.
l Database Admin Password: A strong password for SYS, SYSTEM, PDB Admin,
and TDE wallet for the new database. The password must be nine to thirty
characters and contain at least two uppercase, two lowercase, two numeric, and
two special characters. The special characters must be _, #, or -.
l Confirm Database Admin Password: Re-enter the database admin password.
l Password for Transparent Data Encryption (TDE) Wallet or RMAN
Encryption: Enter either the TDE wallet password or the RMAN encryption
password for decrypting the backup, whichever is applicable.
5. Click Create Database from Backup.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
l ListBackups
l GetBackup
l RestoreDatabase
l CreateDbHome - For creating a Bare Metal DB System database from a standalone
backup.
For the complete list of APIs for the Database service, see Database Service API.
This topic explains how to recover a Recovery Manager (RMAN) backup stored in Object
Storage.
PREREQUISITES
l A new DB System to restore the database to (see assumptions below). For more
information, see Managing DB Systems.
l The Oracle Database Cloud Backup Module must be installed on the DB System. For
more information, see Installing the Backup Module on the DB System.
ASSUMPTIONS
l A new DB System has been created to host the restored database and no other database
exists on the new DB System. It is possible to restore to a DB System that has existing
databases, but that is beyond the scope of this topic.
l The original DB System is lost and all that remains is the latest RMAN backup.
l The Oracle Wallet and/or encryption keys used by the original database at the time of
the last backup are available.
l The RMAN backup contains a copy of the control file and spfile as of the most recent
backup as well as all of the datafile and archivelog backups needed to perform a
complete database recovery.
l An RMAN catalog will not be used during the restore.
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
login as: opc
3. You can use an existing empty database home or create a new one for the restore. Use
the applicable commands to help you complete this step.
If you will be using an existing database home:
l Use the dbcli list-dbhomes command to list the database homes.
[root@dbsys ~]# dbcli list-dbhomes
ID                                       Name                 DB Version  Home Location
---------------------------------------- -------------------- ----------  -----------------------------------------
2e743050-b41d-4283-988f-f33d7b082bda     OraDB12102_home1     12.1.0.2    /u01/app/oracle/product/12.1.0.2/dbhome_1
l Use the dbcli list-databases command to ensure the database home is not
associated with any database.
If necessary, use the dbcli create-dbhome command to create a database home for the
restore.
4. Use the dbcli create-dbstorage command to set up directories for DATA, RECO, and REDO
storage. The following example creates 10 GB of ACFS storage for the rectest database.
[root@dbsys ~]# dbcli create-dbstorage --dbname rectest --dataSize 10 --dbstorage ACFS
1. SSH to the DB System, log in as opc, and then become the oracle user.
sudo su - oracle
2. Create an entry in /etc/oratab for the database. Use the same SID as the original
database.
db1:/u01/app/oracle/product/12.1.0.2/dbhome_1:N
3. Set the ORACLE_HOME and ORACLE_SID environment variables using the oraenv utility.
. oraenv
4. Obtain the DBID of the original database. This can be obtained from the file name of the
controlfile autobackup on the backup media. The file name will include a string that
contains the DBID. The typical format of the string is c-DDDDDDDDDDDD-YYYYMMDD-NN
where DDDDDDDDDDDD is the DBID, YYYYMMDD is the date the backup was created, and NN
is a sequence number to make the file name unique. The DBID in the following
examples is 1508405000. Your DBID will be different.
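As a quick illustration, the DBID can be cut out of an autobackup file name of the form c-DDDDDDDDDDDD-YYYYMMDD-NN; the file name below is a made-up example:

```shell
# Hypothetical controlfile autobackup name; the DBID is the
# second dash-separated field
fname="c-1508405000-20160628-02"
dbid=$(echo "$fname" | cut -d- -f2)
echo "$dbid"   # → 1508405000
```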
Use the following curl syntax to perform a general query of Object Storage. The
parameters shown are the same parameters you specified when installing the backup
module as described in Installing the Backup Module on the DB System.
curl -u '<user_id>:<swift_password>' -v
https://swiftobjectstorage.<region>.oraclecloud.com/v1/<tenant_name>
For example:
curl -u '[email protected]:1cnk!d0++ptETd&C;tHR' -v
https://swiftobjectstorage.<region>.oraclecloud.com/v1/mycompany
To get the DBID from the control file name, use the following syntax:
curl -u '<user_id>:<swift_password>' -v
https://swiftobjectstorage.<region>.oraclecloud.com/v1/<tenant_name>/<bucket_name>?prefix=sbt_
catalog/c-
For example:
curl -u '[email protected]:1cnk!d0++ptETd&C;tHR' -v
https://swiftobjectstorage.<region>.oraclecloud.com/v1/mycompany/dbbackups/?prefix=sbt_catalog/c-
5. Run RMAN and connect to the target database. There is no need to create a pfile or
spfile or use a backup controlfile. These will be restored in the following steps.
Note that RMAN reports the target database as "not started". This is normal and expected
at this point.
rman target /
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
7. Run the STARTUP NOMOUNT command. If the server parameter file is not available,
RMAN attempts to start the instance with a dummy server parameter file. The
ORA-01078 and LRM-00109 errors are normal and can be ignored.
RMAN> STARTUP NOMOUNT
mkdir -p /u01/app/oracle/admin/db1/adump
10. If block change tracking was enabled on the original database, create the directory for
the block change tracking file. This will be a directory under db_create_file_dest.
Search the spfile for the name of the directory.
strings ${ORACLE_HOME}/dbs/spfile${ORACLE_SID}.ora | grep db_create_file_dest
*.db_create_file_dest='/u02/app/oracle/oradata/db1'
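Given the db_create_file_dest value shown above, the directory could be created along these lines. Note that the DB_UNIQUE_NAME/changetracking subdirectory layout is an assumption based on standard Oracle Managed Files naming conventions; confirm the exact path for your database.

```shell
# Create the OMF directory tree for the block change tracking file
# under db_create_file_dest (subdirectory names are an assumption).
mkdir -p /u02/app/oracle/oradata/db1/DB1/changetracking
```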
11. Restart the instance with the restored server parameter file.
STARTUP FORCE NOMOUNT;
12. Restore the controlfile from the RMAN autobackup and mount the database.
set controlfile autobackup format for device type sbt to '%F';
run {
allocate channel c1 device type sbt PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)';
restore controlfile from autobackup;
alter database mount;
}
14. RMAN recovers using archived redo logs until it cannot find any more. It is normal for
an error similar to the one below to occur when RMAN has applied the last archived redo
log in the backup and cannot find additional logs.
unable to find archived log
archived log thread=1 sequence=29
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 06/28/2016 00:57:35
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 29 and
starting SCN of 2349563
The recovery is complete. The database will have all of the committed transactions as of the
last backed up archived redo log.
Note
To initiate the recovery, you'll use the dbcli create-recovery command and specify the
recovery type parameter (either --recoverytype or just -t). You can specify the following
types of recovery:
The dbcli create-recovery command attempts to perform a full recovery of the database. For
information on performing a partial recovery (datafile, tablespace, or PDB), see the Oracle
Database Backup and Recovery User's Guide for version 12.2, 12.1, or 11.2.
PREREQUISITES
l The backup must have been created with the dbcli create-backup command.
l If the database is configured with Transparent Data Encryption (TDE), make sure the
password-based and autologin TDE wallets are present in the following location:
/opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name>
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
login as: opc
3. Find the ID of database you want to recover by using the dbcli list-databases command.
You'll need the ID for the following step.
[root@dbsys ~]# dbcli list-databases
ID                                       DB Name    DB Version   CDB    Class   Shape   Storage   Status
---------------------------------------- ---------- ------------ ------ ------- ------- --------- ----------
5a3e980b-e0fe-4909-9628-fcefe43b3326     prod       12.1.0.2     true   OLTP    odb1    ACFS      Configured
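If you want to capture the ID without copying it by hand, the fixed-column output above parses cleanly with awk. A small sketch (the sample row mirrors the output above; in practice you would pipe the real dbcli list-databases output into the function):

```shell
# Return the ID column for a given database name from
# "dbcli list-databases" style output: the ID is the first
# whitespace-separated field of the row whose second field
# is the database name.
db_id_for() {
  printf '%s\n' "$1" | awk -v name="$2" '$2 == name { print $1 }'
}

sample='5a3e980b-e0fe-4909-9628-fcefe43b3326 prod 12.1.0.2 true OLTP odb1 ACFS Configured'
db_id_for "$sample" prod
```

This prints the database ID needed for the dbcli create-recovery command in the next step.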
4. Initiate the recovery by using the dbcli create-recovery command, specifying the
database ID, the recovery type parameter (-t), and any parameter required for the
recovery type, such as the time stamp or system change number.
The following example initiates a complete recovery.
[root@dbsys ~]# dbcli create-recovery --dbid 5a3e980b-e0fe-4909-9628-fcefe43b3326 --recoverytype
Latest
{
"jobId" : "c9f81228-2ce9-43b4-88f6-b260d398cf06",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 08, 2016 18:20:47 PM UTC",
"description" : "Create recovery for database id :5a3e980b-e0fe-4909-9628-fcefe43b3326",
"updatedTime" : "August 08, 2016 18:20:47 PM UTC"
}
Job details
----------------------------------------------------------------
ID: c9f81228-2ce9-43b4-88f6-b260d398cf06
Description: Create recovery for database id :5a3e980b-e0fe-4909-9628-fcefe43b3326
Status: Success
Created: August 8, 2016 6:20:47 PM UTC
Message:
You can also check the database restore report logs on the DB System at:
/opt/oracle/dcs/log/<nodename>/rman/bkup/<db_unique_name>
Recovering a Database from the Oracle Cloud Infrastructure Classic Object Store
Note
This topic explains how to recover a database using a backup created by the Oracle Database
Backup Module and stored in Oracle Cloud Infrastructure Object Storage Classic.
PREREQUISITES
l The service name, identity name, container, user name, and password for Oracle Cloud
Infrastructure Object Storage Classic.
l The backup password if password-based encryption was used when backing up to Object
Storage Classic.
l The source database ID, database name, database unique name (required for setting up
storage).
l If the source database is configured with Transparent Data Encryption (TDE), you'll
need a backup of the wallet and the wallet password.
l tnsnames.ora entries to set up for any database links.
l The output of opatch lsinventory for the source database Oracle home, for reference.
l A copy of the sqlpatch directory from the source database home. This is required for
rollback in case the target database does not include these patches.
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
login as: opc
3. Use the dbcli create-dbstorage command to set up directories for DATA, RECO, and REDO storage.
The following example creates 10GB of ACFS storage for the tdetest database.
Note
4. Use the dbcli list-dbstorages command to list the storage ID. You'll need the ID for the
next step.
[root@dbsys ~]# dbcli list-dbstorages
ID Type DBUnique Name Status
---------------------------------------- ------ -------------------- ----------
9dcdfb8e-e589-4d5f-861a-e5ba981616ed Acfs tdetest Configured
5. Use the dbcli describe-dbstorage command with the storage ID from the previous step
to list the DATA, RECO and REDO locations.
[root@dbsys ~]# dbcli describe-dbstorage --id 9dcdfb8e-e589-4d5f-861a-e5ba981616ed
DBStorage details
----------------------------------------------------------------
ID: 9dcdfb8e-e589-4d5f-861a-e5ba981616ed
DB Name: tdetest
DBUnique Name: tdetest
DB Resource ID:
Storage Type: Acfs
DATA Location: /u02/app/oracle/oradata/tdetest
RECO Location: /u03/app/oracle/fast_recovery_area/
REDO Location: /u03/app/oracle/redo/
State: ResourceState(status=Configured)
Created: August 24, 2016 5:25:38 PM UTC
UpdatedTime: August 24, 2016 5:25:53 PM UTC
6. Note the DATA, RECO and REDO locations. You'll need them later to set the
db_create_file_dest, db_create_online_log_dest, and db_recovery_file_dest parameters
for the database.
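For reference, these locations map onto the initialization parameters as follows. This is a sketch using the sample paths from the describe-dbstorage output above; the indexed form db_create_online_log_dest_1 and the db_recovery_file_dest_size value are assumptions (the size parameter must also be set whenever the recovery area destination is configured).

```sql
-- Point the OMF destinations at the storage created by dbcli
-- (sample paths; adjust for your system).
ALTER SYSTEM SET db_create_file_dest='/u02/app/oracle/oradata/tdetest' SCOPE=spfile;
ALTER SYSTEM SET db_create_online_log_dest_1='/u03/app/oracle/redo/' SCOPE=spfile;
ALTER SYSTEM SET db_recovery_file_dest='/u03/app/oracle/fast_recovery_area/' SCOPE=spfile;
ALTER SYSTEM SET db_recovery_file_dest_size=10G SCOPE=spfile;
```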
CHOOSING AN ORACLE_HOME
Decide which ORACLE_HOME to use for the database restore and then switch to that home
with the correct ORACLE_BASE, ORACLE_HOME, and PATH settings. The ORACLE_HOME must
not already be associated with a database.
To get a list of existing ORACLE_HOMEs and to ensure that the ORACLE_HOME is empty, use
the dbcli list-dbhomes and the dbcli list-databases commands, respectively. To create a new
ORACLE_HOME, use the dbcli create-dbhome command.
Skip this section if the source database is not configured with TDE.
3. Copy the ewallet.p12 file from the source database to the directory you created in the
previous step.
4. On the target host, make sure that $ORACLE_HOME/network/admin/sqlnet.ora
contains the following line:
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))
Add the line if it doesn't exist in the file. (The line might not be there if this is a new
home and no database has been created yet on this host.)
5. Create the autologin wallet from the password-based wallet to allow auto-open of the
wallet during restore and recovery operations.
For version 12c, use the ADMINISTER KEY MANAGEMENT command:
$ cat create_autologin_12.sh
#!/bin/sh
if [ $# -lt 2 ]; then
echo "Usage: $0 <dbuniquename> <remotewalletlocation>"
exit 1;
fi
mkdir -p /opt/oracle/dcs/commonstore/wallets/tde/$1
cp $2/ewallet.p12* /opt/oracle/dcs/commonstore/wallets/tde/$1
rm -f autokey.ora
echo "db_name=$1" > autokey.ora
autokeystoreLog="autologinKeystore_`date +%Y%m%d_%H%M%S_%N`.log"
echo "Enter Keystore Password:"
read -s keystorePassword
echo "Creating AutoLoginKeystore -> "
sqlplus "/as sysdba" <<EOF
spool $autokeystoreLog
set echo on
startup nomount pfile=autokey.ora
ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE
FROM KEYSTORE '/opt/oracle/dcs/commonstore/wallets/tde/$1' -- Keystore location
IDENTIFIED BY "$keystorePassword";
shutdown immediate;
EOF
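For version 11.2, which does not support ADMINISTER KEY MANAGEMENT, the autologin wallet can instead be created from the password-based wallet with the orapki utility. A sketch, assuming the wallet directory follows the convention used above:

```shell
# Create cwallet.sso (the autologin wallet) alongside ewallet.p12.
# orapki prompts for the wallet password.
orapki wallet create \
  -wallet /opt/oracle/dcs/commonstore/wallets/tde/<db_unique_name> \
  -auto_login
```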
The backup module JAR file is included on the DB System but you need to install it.
1. SSH to the DB System, log in as opc, and then become the oracle user.
ssh -i <path to SSH key used when launching the DB System> opc@<DB System IP address or hostname>
sudo su - oracle
2. Change to the directory that contains the backup module opc_install.jar file.
cd /opt/oracle/oak/pkgrepos/orapkgs/oss/<version>/
3. Use the command syntax described in Installing the Oracle Database Cloud Backup
Module to install the backup module.
Set the following environment variables for the RMAN and SQL*Plus sessions for the
database:
ORACLE_HOME=<path of Oracle Home where the database is to be restored>
ORACLE_SID=<database instance name>
ORACLE_UNQNAME=<db_unique_name in lower case>
NLS_DATE_FORMAT="mm/dd/yyyy hh24:mi:ss"
ALLOCATING AN RMAN SBT CHANNEL
For each restore operation, allocate an SBT channel and set the SBT_LIBRARY parameter to
the location of the libopc.so file and the OPC_PFILE parameter to the location of the
opc_sbt.ora file, for example:
ALLOCATE CHANNEL c1 DEVICE TYPE sbt MAXPIECESIZE 2G FORMAT '%d_%I_%U' PARMS 'SBT_LIBRARY=/tmp/oss/libopc.so ENV=(OPC_PFILE=/<ORACLE_HOME>/dbs/opc_sbt.ora)';
(For more information about these files, see Files Created When the Backup Module is
Installed.)
Make sure that decryption is turned on for all the RMAN restore sessions.
set decryption wallet open identified by <keystore password>;
For more information, see Providing Password Required to Decrypt Encrypted Backups.
RESTORING THE SPFILE
The following sample shell script restores the spfile. Set the $dbID variable to the DBID of the
database being restored. By default, the spfile is restored to
$ORACLE_HOME/dbs/spfile<sid>.ora.
rman target / <<EOF
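A fuller sketch of the script outlined here, under the assumption that the SBT library and configuration file paths match the control file example shown earlier (adjust both for your installation):

```shell
# Sketch: restore the spfile from the Object Storage autobackup.
dbID=1508405000   # DBID of the database being restored (example value)
rman target / <<EOF
startup force nomount;
set dbid=$dbID;
set controlfile autobackup format for device type sbt to '%F';
run {
  allocate channel c1 device type sbt PARMS 'SBT_LIBRARY=/home/oracle/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/home/oracle/config)';
  restore spfile from autobackup;
}
exit;
EOF
```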
3. Remove the control_files parameter. The Oracle Managed Files (OMF) parameters
will be used to create the control file.
alter system reset control_files scope=spfile;
4. Restart the database in nomount mode using the newly added parameters.
shutdown immediate
startup nomount
Modify the following sample shell script for your environment to restore the control file. Set
the $dbID variable to the DBID of the database being restored. Set SBT_LIBRARY to the
location specified in the -libDir parameter when you installed the Backup Module. Set
OPC_PFILE to the location specified in the -configFile parameter, which defaults to
ORACLE_HOME/dbs/opc<SID>.ora.
rman target / <<EOF
exit;
EOF
1. Preview and validate the backup. The database is now mounted, and RMAN should be
able to locate the backup from the restored control file. This step helps ensure that the
list of archived logs is present and that the backup components can be restored.
In the following examples, modify SBT_LIBRARY and OPC_PFILE as needed for your
environment.
rman target / <<EOF
Review the output and if there are error messages, investigate the cause of the
problem.
2. Redirect the restore by using set newname to restore the data files in OMF format, and use
switch datafile all to update the control file with the new data file copies.
rman target / <<EOF
This recovery will attempt to use the last available archive log backup and then fail with
an error, for example:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 07/20/2016 12:09:02
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 22 and
starting SCN of 878327
3. To complete the incomplete recovery, run a recovery using the sequence number and
thread number shown in the RMAN-06054 message, for example:
Recover database until sequence 22 thread 1;
1. Make sure the database COMPATIBLE parameter value is acceptable. For an 11.2
database, the minimum compatibility value is 11.2.0.4. For a 12c database, the
minimum compatibility value is 12.1.0.2. If the value is less than the minimum, the
database cannot be registered until you upgrade the database compatibility.
2. Verify that the database has registered with the listener and the service name.
lsnrctl services
3. Make sure the password file was restored or created for the new database.
ls -ltr $ORACLE_HOME/dbs/orapw<oracle sid>
If the file does not exist, create it using the orapwd utility.
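If it needs to be recreated, a command along these lines will do it. This is a sketch: the file name follows the orapw<SID> convention shown above, the entries value is an example, and orapwd prompts for the SYS password when the password parameter is omitted.

```shell
# Recreate the password file for the instance (orapwd prompts
# for the SYS password).
orapwd file=$ORACLE_HOME/dbs/orapw<oracle sid> entries=5
```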
The command output should indicate read write mode. The dbcli register-database
command will attempt to run datapatch, which requires read write mode. If there are
PDBs, they should also be in read write mode to ensure that datapatch runs on them.
5. From the Oracle home of the restored database, use the following SQL*Plus command to
verify the connection as SYS:
conn sys/<password>@//<hostname>:1521/<database service name>
This connection is required to register the database later. Fix any connection issues
before continuing.
6. Make sure the database is running on an spfile by using the following SQL*Plus command.
SHOW PARAMETERS SPFILE
7. (Optional) If you would like to manage the database backup with the dbcli command line
interface, you can associate a new or existing backup configuration with the migrated
database when you register it or after you register it. A backup configuration defines the
backup destination and recovery window for the database. Use the following commands
to update, list, and display backup configurations:
l dbcli update-backupconfig
l dbcli list-backupconfigs
l dbcli describe-backupconfig
8. Copy the $ORACLE_HOME/sqlpatch folder from the source database home to the target
database home. This enables the dbcli register-database command to roll back any
conflicting patches.
Note
The dbcli register-database command registers the restored database to the dcs-agent so it
can be managed by the dcs-agent stack.
Note
As the root user, use the dbcli register-database command to register the database on
the DB System, for example:
[root@dbsys ~]# dbcli register-database --dbclass OLTP --dbshape odb1 --servicename tdetest --
syspassword
Password for SYS:
{
"jobId" : "317b430f-ad5f-42ae-bb07-13f053d266e2",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 08, 2016 05:55:49 AM EDT",
"description" : "Database service registration with db service name: tdetest",
"updatedTime" : "August 08, 2016 05:55:49 AM EDT"
}
UPDATING TNSNAMES.ORA
Check the tnsnames.ora in the backup location, check the database links used in the cloned
database, and then add any relevant connection strings to the cloned database file at
$ORACLE_HOME/network/admin/tnsnames.ora.
For version 11.2 databases, the sqlpatch application is not automated, so any one-off patches
applied to the source database that are not part of the installed PSU must be rolled back
manually in the target database. After registering the database, execute the catbundle.sql
script and then the postinstall.sql script with the corresponding PSU patch (or the overlay
patch on top of the PSU patch), as described below.
1. On the DB System, use the dbcli list-dbhomes command to find the PSU patch
number for the version 11.2 database home. In the following sample command output,
the PSU patch number is the second number in the DB Version column:
[root@dbsys ~]# dbcli list-dbhomes
ID                                       Name               DB Version                             Home Location                               Status
---------------------------------------- ------------------ -------------------------------------- ------------------------------------------- ----------
59d9bc6f-3880-4d4f-b5a6-c140f16f8c64     OraDB11204_home1   11.2.0.4.160719 (23054319, 23054359)   /u01/app/oracle/product/11.2.0.4/dbhome_1   Configured
(The first patch number, 23054319 in the example above, is for the OCW component in
the database home.)
2. Find the overlay patch, if any, by using the opatch lsinventory command. In the following
example, patch number 24460960 is the overlay patch on top of the 23054359 PSU
patch.
$ $ORACLE_HOME/OPatch/opatch lsinventory
...
Installed Top-level Products (1):
Bugs fixed:
23513711, 23065323, 21281607, 24006821, 23315889, 22551446, 21174504
This patch overlays patches:
23054359
This patch needs patches:
23054359
as prerequisites
4. Apply the sqlpatch, using the overlay patch number from the previous step, for
example:
SQL> connect / as sysdba
SQL> @$ORACLE_HOME/sqlpatch/24460960/postinstall.sql
exit
Note
After the database is restored and registered on the DB System, use the following checklist to
verify the results and perform any post-restore customizations.
Note
This topic explains how to use the Console to manage Data Guard associations in your DB
System.
For complete information on Oracle Data Guard, see the Data Guard Concepts and
Administration documentation on the Oracle Document Portal.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
If you're new to policies, see Getting Started with Policies and Common Policies.
Prerequisites
A Data Guard implementation requires two DB Systems, one containing the primary database
and one containing the standby database.
l Both DB Systems must be in the same compartment, and they must be the same shape.
l The database versions and editions must be identical. Data Guard does not support
Standard Edition. (Active Data Guard requires Enterprise Edition - Extreme
Performance.)
l Both DB Systems must use the same VCN, and port 1521 must be open.
l Important! Properly configure the security list ingress and egress rules for the subnets
of both DB Systems in the Data Guard association to allow TCP traffic to flow between
the applicable ports.
For example, if the subnet of the primary DB System uses the source CIDR 10.0.0.0/24
and the subnet of the failover DB System uses the source CIDR 10.0.1.0/24, create
rules as shown in the following example.
Note
Ingress Rules:
Source: 10.0.1.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521
Egress Rules:
Destination: 10.0.1.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521
Ingress Rules:
Source: 10.0.0.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521
Egress Rules:
Destination: 10.0.0.0/24
IP Protocol: TCP
Source Port Range: All
Destination Port Range: 1521
Allows: TCP traffic for ports: 1521
For information about creating and editing rules, see Security Lists.
Oracle recommends that the DB System of the standby database be in a different availability
domain from the DB System of the primary database to improve availability and disaster
recovery.
If you don't already have DB Systems with the databases that will assume the primary and
standby roles, create them as described in Managing DB Systems. A new DB System includes
an initial database.
Oracle Data Guard ensures high availability, data protection, and disaster recovery for
enterprise data. The Oracle Cloud Infrastructure Database Data Guard implementation
requires two databases, one in a primary role and one in a standby role. The two databases
compose a Data Guard association. Most of your applications access the primary database.
The standby database is a transactionally consistent copy of the primary database.
Data Guard maintains the standby database by transmitting and applying redo data from the
primary database. If the primary database becomes unavailable, you can use Data Guard to
switch the standby database to the primary role.
SWITCHOVER
A switchover reverses the primary and standby database roles. Each database continues to
participate in the Data Guard association in its new role. A switchover ensures no data loss.
You can use a switchover before you perform planned maintenance on the primary database.
FAILOVER
A failover transitions the standby database into the primary role after the existing primary
database fails or becomes unreachable. A failover might result in some data loss when you
use Maximum Performance protection mode.
REINSTATE
Reinstates a database into the standby role in a Data Guard association. You can use the
reinstate command to return a failed database into service after correcting the cause of
failure.
The Console allows you to enable a Data Guard association between existing databases,
change the role of a database in a Data Guard association using either a switchover or a
failover operation, and reinstate a failed database that has been repaired.
To reinstate a database
After you fail over a primary database to its standby, the standby assumes the primary role
and the old primary is identified as a disabled standby. After you correct the cause of failure,
you can reinstate the failed database as a functioning standby for the current primary by using
its Data Guard association.
Note
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
l CreateDataGuardAssociation
l GetDataGuardAssociation
l ListDataGuardAssociations
l SwitchoverDataGuardAssociation
l FailoverDataGuardAssociation
l ReinstateDataGuardAssociation
For the complete list of APIs for the Database service, see Database Service API.
Note
This topic explains how to use the database CLI to set up Data Guard with Fast-Start Failover
(FSFO) in Oracle Cloud Infrastructure. The following sections explain how to prepare the
primary and standby databases, and then configure Data Guard to transmit redo data from the
primary database and apply it to the standby database.
Note
This topic assumes that you are familiar with Data Guard and
FSFO. To learn more about them, see documentation at the
Oracle Document Portal.
Prerequisites
To perform the procedures in this topic, you'll need the following information for the primary
and standby databases.
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
login as: opc
3. To find the db_name (or oracle_sid) and db_uniqueName, run the dbcli list-databases -j command.
[root@dbsys ~]# dbcli list-databases -j
[ {
"id" : "80ad855a-5145-4f8f-a08f-406c5e4684ff",
"name" : "dbtst",
"dbName" : "dbtst",
"databaseUniqueName" : "dbtst_phx1cs",
"dbVersion" : "12.1.0.2",
"dbHomeId" : "2efe7af7-0b70-4e9b-ba8b-71f11c6fe287",
"instanceOnly" : false,
.
.
.
4. To find the Oracle home directory (or database home), run the dbcli list-dbhomes
command. If there are multiple database homes on the DB System, use the one that
matches the "dbHomeId" in the dbcli list-databases -j command output shown
above.
[root@dbtst ~]# dbcli list-dbhomes
ID                                       Name               DB Version                             Home Location                               Status
---------------------------------------- ------------------ -------------------------------------- ------------------------------------------- ----------
2efe7af7-0b70-4e9b-ba8b-71f11c6fe287     OraDB12102_home1   12.1.0.2.160719 (23739960, 23144544)   /u01/app/oracle/product/12.1.0.2/dbhome_1   Configured
33ae99fe-5413-4392-88da-997f3cd24c0f     OraDB11204_home1   11.2.0.4.160719 (23054319, 23054359)   /u01/app/oracle/product/11.2.0.4/dbhome_1   Configured
If you don't already have a primary DB System, create one as described in Managing DB
Systems. The DB System will include an initial database. You can create additional databases
by using the dbcli create-database command available on the DB System.
Note
1. Create a standby DB System as described in Managing DB Systems and wait for the
DB System to finish provisioning and become available.
You can create the standby DB System in a different availability domain from the
primary DB System for availability and disaster recovery purposes (this is strongly
recommended). You can create the standby DB System in the primary DB System's
cloud network so that both systems are in a single, routable network.
2. SSH to the DB System.
ssh -i <private_key_path> opc@<db_system_ip_address>
3. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which will set the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
login as: opc
4. The DB System will include an initial database, but you'll need to create a standby
database by using the dbcli create-database command with the --instanceonly
parameter. This parameter creates only the database storage structure and starts the
database in nomount mode (no other database files are created).
When using --instanceonly, both the --dbname and --adminpassword parameters are
required and they should match the dbname and admin password of the primary
database to avoid confusion.
The following sample command prompts for the admin password and then creates a
storage structure for a database named dbname.
[root@dbsys ~]# dbcli create-database --dbname <same as primary dbname> --databaseUniqueName
<different from primary uniquename> --instanceonly --adminpassword
If you are using version 12c pluggable databases, also specify the --cdb parameter.
For complete command syntax, see dbcli create-database.
5. Wait a few minutes for the dbcli create-database command to create the standby
database.
You can use the dbcli list-jobs command to verify that the creation job ran
successfully, and then the dbcli list-databases command to verify that the database is
configured.
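The job status can also be checked from a script, since the dbcli list-jobs output is fixed-column text. A sketch (the sample row is modeled on the job output shown earlier; in practice you would pipe the real dbcli list-jobs output into the function):

```shell
# Return the last field (the status) of the "dbcli list-jobs"
# row whose first field matches the given job ID.
job_status() {
  printf '%s\n' "$1" | awk -v id="$2" '$1 == id { print $NF }'
}

sample_job='c9f81228-2ce9-43b4-88f6-b260d398cf06  Create recovery for database  Success'
job_status "$sample_job" c9f81228-2ce9-43b4-88f6-b260d398cf06
```

This prints Success once the creation job has completed.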
To prepare the primary DB System, you'll need to configure static listeners, update
tnsnames.ora, and configure some database settings and parameters.
1. SSH to the primary DB System, log in as the opc or root user, and sudo to the grid
OS user.
sudo su - grid
The sample above assumes that name resolution is working and that the <standby_
server>.<domain> is resolvable at the primary database. You can also use the private IP
address of the standby server if the IP addresses are routable within a single cloud network
(VCN).
2. Identify the Broker configuration file names and locations. The commands used for this
depend on the type of database storage. If you're not sure of the database storage type,
use the dbcli list-databases command on the DB System.
For ACFS database storage, use the following commands to set the Broker configuration
files.
For ASM database storage, use the following commands to set the Broker configuration
files.
SQL> alter system set dg_broker_config_file1='+DATA/<primary db_unique_name>/dr1<db_unique_name>.dat';
SQL> alter system set dg_broker_config_file2='+DATA/<primary db_unique_name>/dr2<db_unique_name>.dat';
5. Add Standby Redo Logs (SRLs), based on the Online Redo Logs (ORLs). On a newly
launched DB System, there will be three ORLs of size 1073741824, so create four SRLs
of the same size.
You can use the query below to determine the number and size (in bytes) of the ORLs.
SQL> select group#, bytes from v$log;
GROUP# BYTES
---------- ----------
1 1073741824
2 1073741824
3 1073741824
There should be only one member in the SRL group (by default, a DB System is created
with only one member per SRL group). To ensure this, you can name the file with the
following syntax.
alter database add standby logfile thread 1 group 4 ('<logfile name with full path>') size
1073741824, group 5 ('<logfile name with full path>') size 1073741824 ...
For ASM/OMF configurations, the same command specifies the disk group instead of a full
logfile path.
alter database add standby logfile thread 1 group 4 ('+RECO') size 1073741824, group 5 ('+RECO') size
1073741824 ...
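After adding the standby redo logs, you can confirm the result by querying the standard v$standby_log view (shown as a sketch; expect four groups of 1073741824 bytes, one member each):

```sql
-- Verify the standby redo logs created above.
select group#, thread#, bytes, status from v$standby_log;
```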
9. Perform a single redo log switch to activate archiving if the database is newly created. (At
least one log must be archived prior to running the RMAN duplicate.)
SQL> alter system switch logfile;
Before you prepare the standby database, make sure the database home on the standby is the
same version as on the primary. (If the primary and standby databases are both newly
created with the same database version, the database homes will be the same.) If it is not,
create a database home that is the same version. You can use the dbcli list-dbhomes
command to verify the versions and the dbcli create-dbhome command to create a new
database home as needed.
To prepare the standby DB System, you'll need to configure static listeners, update
tnsnames.ora, configure TDE Wallet, create a temporary password file, verify connectivity,
run RMAN DUPLICATE, enable FLASHBACK, and then create the database service.
1. SSH to the standby DB System, log in as the opc or root user, and sudo to the grid
OS user.
sudo su - grid
4. Verify that the static listeners are available. The sample output below is for database
version 12.1.0.2. Note that the ...status UNKNOWN messages are expected at this
point.
$ lsnrctl status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 29-SEP-2016 21:09:19
Uptime 0 days 0 hr. 0 min. 5 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12.1.0.2/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/dg2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.0.1.24)(PORT=1521)))
Services Summary...
Service "dg2_phx2hx.oratst.org" has 1 instance(s).
Instance "dg2", status UNKNOWN, has 1 handler(s) for this service...
Service "dg2_phx2hx_DGMGRL.oratst.org" has 1 instance(s).
Instance "dg2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
As the oracle user, add the primary and standby database net service names to
$ORACLE_HOME/network/admin/tnsnames.ora, where $ORACLE_HOME is the database home
where the standby database is running.
<Primary db_unique_name> =
(DESCRIPTION =
(SDU=65535)
(ADDRESS = (PROTOCOL = TCP)(HOST = <primary_server>.<domain>) (PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <primary db_unique_name>.<primary db_domain>)
)
)
<Standby db_unique_name> =
(DESCRIPTION =
(SDU=65535)
(ADDRESS = (PROTOCOL = TCP)(HOST = <standby_server>.<domain>) (PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = <standby db_unique_name>.<db_domain>)
)
)
Copy the TDE wallet files from the primary DB System to standby DB System using SCP. The
following sample command assumes the SCP command is being run by the oracle OS user and
that the private key for oracle has been created and exists on the host where SCP is being run.
$ scp -i <private key> primary_server:/opt/oracle/dcs/commonstore/wallets/tde/<primary db_unique_name>/*
standby_server:/opt/oracle/dcs/commonstore/wallets/tde/<standby db_unique_name>
As the oracle user, create the following directory for database version 11.2.0.4. This step is
optional for version 12.2.0.1 and version 12.1.0.2.
[oracle@dbsys ~]$ mkdir -pv /u03/app/oracle/redo/<standby db_unique_name uppercase>/controlfile
As the oracle user, create the following directory to use as the audit file destination.
[oracle@dbsys ~]$ mkdir -p /u01/app/oracle/admin/<db_name>/adump
The password must be the same as the admin password of the primary database. Otherwise,
the RMAN duplicate step below will fail with: RMAN-05614: Passwords for target and
auxiliary connections must be the same when using active duplicate.
The database needs to be started in nomount mode with no spfile specified, but the
original init file contains an spfile parameter which will prevent the RMAN duplicate step
from working.
4. The dbcli create-database --instanceonly command used earlier opens the
standby database as a primary in read/write mode, so the database needs to be brought
down before proceeding to the nomount step below.
$ sqlplus / as sysdba
SQL> shutdown immediate
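A minimal sketch of the nomount startup described above, assuming you have copied the init file under $ORACLE_HOME/dbs and removed its spfile parameter (the path shown is a placeholder, not a fixed location):

$ sqlplus / as sysdba
SQL> startup nomount pfile='/tmp/init<standby db_unique_name>.ora'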
1. Make sure that the listener port 1521 is open in the security list(s) used for the primary
and standby DB Systems. For more information, see Updating the Security List for the
DB System.
2. From the primary database, connect to the standby database.
$ sqlplus sys/<password>@<standby net service name> as sysdba
If the primary database is large, you can allocate additional channels to improve
performance. For a newly installed database, one channel typically completes the database
duplication in a couple of minutes.
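One way to allocate extra channels is an RMAN run block. The channel counts below are illustrative, and the connect strings assume the TNS aliases configured earlier; this is a sketch, not the exact command used by the procedure:

$ rman target sys/<password>@<primary tns alias> auxiliary sys/<password>@<standby tns alias>
RMAN> run {
  allocate channel p1 device type disk;
  allocate channel p2 device type disk;
  allocate auxiliary channel s1 device type disk;
  duplicate target database for standby from active database nofilenamecheck;
}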
Make sure that no errors are generated by the RMAN DUPLICATE command. If errors
occur, restart the database using the init.ora file (not the spfile), because an spfile may have
been generated under $ORACLE_HOME/dbs as part of RMAN DUPLICATE.
In the following examples, use lowercase for the <Standby db_unique_name> unless
otherwise specified.
Oracle recommends creating a database service for the standby database by using srvctl.
7. Stop the database and start the standby database by using srvctl.
$ srvctl start database -d <standby db_unique_name>
Perform the following steps to complete the configuration of Data Guard and enable redo
transport from the primary database and redo apply in the standby database.
1. Run the dgmgrl command line utility from either the primary or standby DB System and
connect to the primary database using sys credentials.
DGMGRL> connect sys/<sys password>@<primary tns alias>
2. Create the Data Guard configuration and identify the connect identifiers for the primary
and standby databases.
DGMGRL> create configuration mystby as primary database is <primary db_unique_name> connect
identifier is <primary tns alias>;
add database <standby db_unique_name> as connect identifier is <standby tns alias> maintained
as physical;
4. Verify that Data Guard setup was done properly. Run the following SQL in both the
primary and standby databases.
SQL> select FORCE_LOGGING, FLASHBACK_ON, OPEN_MODE, DATABASE_ROLE, SWITCHOVER_STATUS,
DATAGUARD_BROKER, PROTECTION_MODE from v$database;
5. Verify that Data Guard processes are initiated in the standby database.
SQL> select PROCESS,PID,DELAY_MINS from V$MANAGED_STANDBY;
7. Verify that the Data Guard configuration is working. Specifically, make sure redo
shipping and redo apply are working and that the standby is not unreasonably lagging
behind the primary.
DGMGRL> show configuration verbose
DGMGRL> show database verbose <standby db_unique_name>
DGMGRL> show database verbose <primary db_unique_name>
Any discrepancies, errors, or warnings should be resolved. You can also run a
transaction on the primary and verify that it's visible in the standby.
The best practice for high availability and durability is to run the primary, standby, and
observer in separate availability domains. The observer determines whether to fail over
to a specific target standby database. The server used for the observer requires the Oracle
Client Administrator software, which includes Oracle Net and the Data Guard broker (DGMGRL).
1. Configure TNS alias names for both the primary and standby databases as described
previously, and verify the connection to both databases.
2. Change protection mode to either maxavailability or maxperformance (maxprotection is
not supported for FSFO).
To enable maxavailability:
DGMGRL> edit database <standby db_unique_name> set property 'logXptMode'='SYNC';
DGMGRL> edit database <primary db_unique_name> set property 'logXptMode'='SYNC';
DGMGRL> edit configuration set protection mode as maxavailability;
To enable maxperformance:
DGMGRL> edit configuration set protection mode as maxperformance;
DGMGRL> edit database <standby db_unique_name> set property 'logXptMode'='ASYNC';
DGMGRL> edit database <primary db_unique_name> set property 'logXptMode'='ASYNC';
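After changing the mode, you can confirm that the redo transport setting took effect. This quick check is a suggestion, not part of the original procedure:

DGMGRL> show configuration
DGMGRL> show database <standby db_unique_name> LogXptMode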
6. Start the observer from Broker (it will run in the foreground, but can also be run in the
background).
DGMGRL> start observer
8. Always test failover in both directions to ensure that everything is working as expected.
Verify that FSFO is running properly by performing a shutdown abort of the primary
database.
The observer should start the failover to the standby database. If the protection mode is set
to maxperformance, some loss of data can occur, based on the FastStartFailoverLagLimit
value.
Note
The dbcli command line interface (CLI) is available on 1-node and 2-node RAC DB Systems.
After you connect to the DB System, you can use the CLI to perform tasks such as creating
Oracle database homes and databases.
Renamed CLIs
On April 20, 2017, the odacli CLI was renamed to dbcli and
the odadmcli administrative CLI was renamed to dbadmcli.
DB Systems launched on or after that date will include the
renamed CLIs.
Operational Notes
- The CLI commands and most parameters are case sensitive and should be typed as
shown. A few parameters are not case sensitive, as indicated in the parameter
descriptions, and can be typed in uppercase or lowercase.
Syntax
where:
The remainder of this topic contains syntax and other details about the commands.
Occasionally, new commands are added to the dbcli CLI and other commands are updated to
support new features. You can use the following command to update the CLI.
Use the cliadm update-dbcli command to update the dbcli CLI with the latest new and
updated commands.
Note
SYNTAX
PARAMETERS
EXAMPLE
Backup Command
Before you can back up a database by using the dbcli create-backup command, you'll need to:
Once a database is associated with a backup configuration, you can use the database's default
backup schedule to run backup jobs automatically. For more information, see Schedule
Commands.
SYNTAX
PARAMETERS
-i --dbid    The ID of the database to back up. Use the dbcli list-databases command to get the database's ID.
EXAMPLE
Backupconfig Commands
A backup configuration determines the backup destination and recovery window for database
backups. You create the backup configuration and then associate it to a database by using the
dbcli update-database command.
Once a database is associated with a backup configuration, you can use the database's default
backup schedule to run backup jobs automatically. For more information, see Schedule
Commands.
- dbcli create-backupconfig
- dbcli list-backupconfigs
- dbcli describe-backupconfig
- dbcli update-backupconfig
- dbcli delete-backupconfig
Use the dbcli create-backupconfig command to create a backup configuration that defines
the backup destination and recovery window.
SYNTAX
PARAMETERS
-o --objectstoreswiftId    The ID of the object store that contains the endpoint and credentials for the Oracle Cloud Infrastructure Object Storage service. Use the dbcli list-objectstoreswifts command to get the object store ID. Use the dbcli create-objectstoreswift command to create an object store.
-w --recoverywindow    The number of days for which backups and archived redo logs are maintained. The interval always ends with the current time and extends back in time for the number of days specified.
EXAMPLE
"jobId" : "4e0e6011-db53-4142-82ef-eb561658a0a9",
"tags" : [ ],
"reportLevel" : "Info",
"updatedTime" : "November 18, 2016 20:21:25 PM UTC"
} ],
"createTimestamp" : "November 18, 2016 20:21:25 PM UTC",
"description" : "create backup config:dbbkcfg1",
"updatedTime" : "November 18, 2016 20:21:25 PM UTC"
}
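The recovery window set with -w behaves as a sliding cutoff that always ends now. The following sketch, using a hypothetical 7-day window, shows the arithmetic:

```shell
# Hypothetical window; the real value comes from the -w parameter.
RECOVERY_WINDOW_DAYS=7
NOW=$(date -u +%s)
WINDOW_START=$(( NOW - RECOVERY_WINDOW_DAYS * 86400 ))
# Backups and archived redo logs older than this fall outside the window.
echo "window start: $(date -u -d "@${WINDOW_START}" "+%B %d, %Y %H:%M:%S UTC")"
```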
Use the dbcli list-backupconfigs command to list all the backup configurations in the DB
System.
SYNTAX
PARAMETERS
EXAMPLE
Use the dbcli describe-backupconfig command to show details about a specific backup
configuration.
SYNTAX
PARAMETERS
EXAMPLE
SYNTAX
PARAMETERS
-o --objectstoreswiftId    The ID of the object store that contains the endpoint and credentials for the Oracle Cloud Infrastructure Object Storage service. Use the dbcli list-objectstoreswifts command to get the object store ID. Use the dbcli create-objectstoreswift command to create an object store.
EXAMPLE
The following command updates the recovery window for a backup configuration:
[root@dbsys ~]# dbcli update-backupconfig -i ccdd56fe-a40b-4e82-b38d-5f76c265282d -w 5
{
"jobId" : "0e849291-e1e1-4c7a-8dd2-62b522b9b807",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1468153731699,
"description" : "update backup config: dbbkcfg1",
"updatedTime" : 1468153731700
}
SYNTAX
PARAMETERS
-i --id    The backup configuration ID to delete. Use the dbcli list-backupconfigs command to get the ID.
EXAMPLE
Backupreport Commands
- dbcli create-backupreport
- dbcli list-backupreports
- dbcli delete-backupreport
- dbcli describe-backupreport
Use the dbcli create-backupreport command to create a backup report for a database.
SYNTAX
PARAMETERS
-i --dbid    The ID of the database to create the report for. This ID is different from the DBID. Use the dbcli list-databases command to get the database ID.
EXAMPLE
The following command creates a detailed backup report for the specified database:
[root@dbsys ~]# dbcli create-backupreport -i a892ced1-be04-436e-8e82-bf0a89109164 -w detailed
SYNTAX
PARAMETERS
EXAMPLE
Use the dbcli delete-backupreport command to permanently delete one or more backup
reports.
SYNTAX
PARAMETERS
-d --dbid    (Optional) The ID of the database for which you want to delete backup reports. This ID is different from the DBID. Use the dbcli list-databases command to get the database ID. Requires the --numofday parameter.
-i --reportid    The ID of a specific backup report to delete. Use the dbcli list-backupreports command to get the ID.
EXAMPLE
Use the dbcli describe-backupreport command to display details about a backup report.
SYNTAX
PARAMETERS
-i --id    The ID of the backup report. Use the dbcli list-backupreports command to get the ID.
EXAMPLE
Bmccredential Commands
The following commands are available to manage credentials configurations, which are
required for downloading DB System patches from the Oracle Cloud Infrastructure Object
Storage service. For more information, see Patching a DB System.
- dbcli create-bmccredential
- dbcli list-bmccredentials
- dbcli describe-bmccredential
- dbcli delete-bmccredential
- dbcli update-bmccredential
Note
PREREQUISITES
Before you can create a credentials configuration, you'll need these items:
- An RSA key pair in PEM format (minimum 2048 bits). See How to Generate an API Signing Key.
- The fingerprint of the public key. See How to Get the Key's Fingerprint.
- Your tenancy's OCID and user name's OCID. See Where to Get the Tenancy's OCID and User's OCID.
Then you'll need to upload the public key in the Console. See How to Upload the Public Key.
SYNTAX
PARAMETERS
https://objectstorage.<region>.oraclecloud.com
-f --fingerPrint    The public key fingerprint. You can find the fingerprint in the Console by clicking your user name in the upper right corner and then clicking User Settings. The fingerprint looks something like this:
-f 61:9e:52:26:4b:dd:46:dc:8c:a8:05:6b:9f:0a:30:d2
-k --privateKey    The path to the private key file in PEM format, for example:
-k /root/.ssh/privkey
-t --tenantOcid    Your tenancy OCID. You can find the OCID in the Console, in the footer at the bottom of any page. The tenancy OCID looks something like this:
ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq
EXAMPLE
{
"jobId" : "f8c80510-b717-4ee2-a47e-cd380480b28b",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "December 26, 2016 22:46:38 PM PST",
"resourceList" : [ ],
"description" : "BMC Credentials Creation",
"updatedTime" : "December 26, 2016 22:46:38 PM PST"
}
Use the dbcli list-bmccredentials command to list the credentials configurations on the
DB System.
SYNTAX
PARAMETERS
EXAMPLE
SYNTAX
PARAMETERS
-i --id    The ID for the credentials configuration. Use the dbcli list-bmccredentials command to get the ID.
EXAMPLE
The following command displays details about the specified credentials configuration:
[root@dbsys ~]# dbcli describe-bmccredential -i 09f9988e-eed5-4dde-8814-890828d1c763
SYNTAX
PARAMETERS
-i --id    The ID for the credentials configuration. Use the dbcli list-bmccredentials command to get the ID.
EXAMPLE
SYNTAX
PARAMETERS
-i --id    The ID for the credentials configuration. Use the dbcli list-bmccredentials command to get the ID.
-k --privateKey    The path to the private key file in PEM format, for example:
-k /root/.ssh/privkey
EXAMPLE
{
"jobId" : "6e95a69e-cf73-4e51-a444-c7e4b9631c27",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "January 19, 2017 12:01:10 PM PST",
"resourceList" : [ ],
"description" : "Update BMC Credentials of object 6f921b29-61b6-48f4-889a-ce9270621945",
"updatedTime" : "January 19, 2017 12:01:10 PM PST"
}
Component Command
Note
Use the dbcli describe-component command to show the installed and available patch
versions for the server, storage, and/or database home components in the DB System.
This command requires a valid Object Storage credentials configuration. Use the dbcli create-
bmccredential command to create the configuration if you haven't already done so. If the
configuration is missing or invalid, the command fails with the error: Failed to connect to
the object store. Please provide valid details.
For more information about updating the CLI, creating the credentials configuration, and
applying patches, see Patching a DB System.
SYNTAX
PARAMETERS
-d --dbhomes (Optional) Lists the installed and available patch versions for
only the database home components.
-s --server (Optional) Lists the installed and available patch versions for
only the server components.
EXAMPLE
The following command shows the current component versions and the available patch
versions in the object store:
[root@dbsys ~]# dbcli describe-component
System Version
---------------
12.1.2.10.0
Cpucore Commands
- dbcli list-cpucores
- dbcli describe-cpucore
- dbcli update-cpucore
Use the dbcli list-cpucores command to display the CPU core update history on the local
node.
SYNTAX
PARAMETERS
EXAMPLE
The following command displays the CPU core update history on the local node:
[root@dbsys ~]# dbcli list-cpucores
Use the dbcli describe-cpucore command to list the current core count on the local node.
SYNTAX
PARAMETERS
EXAMPLE
The following command displays the current CPU core count on the local node:
[root@dbsys ~]# dbcli describe-cpucore
Note
SYNTAX
PARAMETERS
EXAMPLE
{
"jobId" : "cf9ba39c-fd5d-47b0-b60d-8338e8b87e0d",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467954691791,
"description" : "CPU cores service update",
"updatedTime" : 1467954691791
}
Database Commands
- dbcli create-database
- dbcli delete-database
- dbcli describe-database
- dbcli list-databases
- dbcli register-database
- dbcli update-database
Use the dbcli create-database command to create a new database. You can create a
database with a new or existing Oracle Database Home.
It takes a few minutes to create the database. After you run the dbcli create-database
command, you can use the dbcli list-jobs command to check the status of the database
creation job.
Once the database is created, you can use the dbcli list-databases -j command to see
additional information about the database.
Note
SYNTAX
dbcli create-database -dh <db_home_id> -cl {OLTP|DSS|IMDB} -n <db_name> -u <unique_name> -bi <bkup_config_id> -m -s <db_shape> -r {ACFS|ASM} -y {SI|RAC|RACOne} -io -d <pdb_admin_user> -p <pdb> -g n -ns <nlcharset> -cs <charset> -l <language> -dt <territory> -v <version> [-co|-no-co] [-h] [-j]
PARAMETERS
-cs --characterset (Optional) Defines the character set for the database.
The default is AL32UTF8.
-cl --dbclass    Defines the database class. The options are OLTP, DSS, or IMDB. The default is OLTP. For Enterprise Edition, all three classes are supported. For Standard Edition, only OLTP is supported.
-dt --dbterritory (Optional) Defines the territory for the database. The
default is AMERICA.
-ns --nlscharacterset (Optional) Defines the national character set for the
database. The default is AL16UTF16.
USAGE NOTES
- You cannot mix Oracle Database Standard Edition and Enterprise Edition databases on the same DB System. (You can mix supported database versions on the DB System, but not editions.)
- When --dbhomeid is not provided, the dbcli create-database command creates a new Oracle Database Home.
- When --dbhomeid is provided, the dbcli create-database command creates the database using the existing Oracle Home. Use the dbcli list-dbhomes command to get the dbhomeid.
- Oracle Database 12.1 is supported on both Oracle Automatic Storage Management (ASM) and Oracle ASM Cluster file system (ACFS). The default is Oracle ACFS.
- Oracle Database 11.2 is supported on Oracle ACFS.
- Each database is configured with its own Oracle ACFS file system for the datafiles and uses the following naming convention: /u02/app/<db user>/oradata/<db name>. The default size of this mount point is 100G.
- Online logs are stored in the /u03/app/<db user>/redo/ directory.
- The Oracle Fast Recovery Area (FRA) is located in the /u03/app/<db user>/fast_recovery_area directory.
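The naming convention above can be sketched for a hypothetical db user and db name; the real values come from the create-database parameters:

```shell
# Hypothetical values for illustration only.
DB_USER=oracle
DB_NAME=crmdb
DATA_DIR=/u02/app/${DB_USER}/oradata/${DB_NAME}    # datafiles (ACFS mount)
REDO_DIR=/u03/app/${DB_USER}/redo                  # online logs
FRA_DIR=/u03/app/${DB_USER}/fast_recovery_area     # Fast Recovery Area
printf '%s\n%s\n%s\n' "$DATA_DIR" "$REDO_DIR" "$FRA_DIR"
```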
EXAMPLES
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 08, 2016 03:59:22 AM EDT",
"description" : "Database service creation with db name: crmdb",
"updatedTime" : "August 08, 2016 03:59:22 AM EDT"
}
SYNTAX
PARAMETERS
EXAMPLE
SYNTAX
PARAMETERS
EXAMPLE
DB Home details
----------------------------------------------------------------
ID: b727bf80-c99e-4846-ac1f-28a81a725df6
Name: OraDB12102_home1
Version: 12.1.0.2
Home Location: /u01/app/orauser/product/12.1.0.2/dbhome_1
Created: Jun 2, 2016 10:19:23 AM
Use the dbcli list-databases command to list all databases on the DB System.
SYNTAX
PARAMETERS
EXAMPLE
"nlsCharacterset" : "AL16UTF16",
"dbTerritory" : "AMERICA",
"dbLanguage" : "AMERICAN"
},
"dbConsoleEnable" : false,
"backupConfigId" : null,
"backupDestination" : "NONE",
"cloudStorageContainer" : null,
"state" : {
"status" : "CONFIGURED"
},
"createTime" : "November 09, 2016 17:23:05 PM UTC",
"updatedTime" : "November 09, 2016 18:00:47 PM UTC"
}
Use the dbcli register-database command to register a database that has been migrated
to Oracle Cloud Infrastructure. The command registers the database to the dcs-agent so it can
be managed by the dcs-agent stack.
Note
SYNTAX
PARAMETERS
-bi --backupconfigid    Defines the backup configuration ID. Use the dbcli list-backupconfigs command to get the ID.
-c --dbclass    Defines the database class. The options are OLTP, DSS, or IMDB. The default is OLTP. For Enterprise Edition, all three classes are supported. For Standard Edition, only OLTP is supported.
-sn --servicename    Defines the database service name used to build the EZCONNECT string for connecting to the database. The connect string format is hostname:port/servicename.
EXAMPLE
The following command registers the database with the specified database class, service
name, and database sizing template.
[root@dbsys ~]# dbcli register-database -c OLTP -s odb1 -sn crmdb.example.com -p
Password for SYS:
{
"jobId" : "317b430f-ad5f-42ae-bb07-13f053d266e2",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 08, 2016 05:55:49 AM EDT",
"description" : "Database service registration with db service name: crmdb.example.com",
"updatedTime" : "August 08, 2016 05:55:49 AM EDT"
}
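The service name registered above becomes part of the EZCONNECT string (hostname:port/servicename). A quick sketch of assembling one, with hypothetical host, port, and service name:

```shell
# Hypothetical values for illustration only.
DB_HOST=dbsys.example.com
DB_PORT=1521
SERVICE_NAME=crmdb.example.com
EZCONNECT="${DB_HOST}:${DB_PORT}/${SERVICE_NAME}"
echo "$EZCONNECT"
```

A client could then connect with, for example, sqlplus system@"${EZCONNECT}".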
SYNTAX
PARAMETERS
-bi --backupconfigid    Defines the backup configuration ID. Use the dbcli list-backupconfigs command to get the ID.
EXAMPLE
Dbhome Commands
- dbcli create-dbhome
- dbcli describe-dbhome
- dbcli delete-dbhome
- dbcli list-dbhomes
- dbcli update-dbhome
SYNTAX
PARAMETERS
EXAMPLE
Use the dbcli describe-dbhome command to display Oracle Database Home details.
SYNTAX
PARAMETERS
EXAMPLE
The following is an example of the output of the dbcli describe-dbhome command:
[root@dbsys ~]# dbcli describe-dbhome -i 52850389-228d-4397-bbe6-102fda65922b
DB Home details
----------------------------------------------------------------
ID: 52850389-228d-4397-bbe6-102fda65922b
Name: OraDB12102_home1
Version: 12.1.0.2
Home Location: /u01/app/oracle/product/12.1.0.2/dbhome_1
Created: June 29, 2016 4:36:31 AM UTC
Use the dbcli delete-dbhome command to delete a database home from the DB System.
SYNTAX
PARAMETERS
Use the dbcli list-dbhomes command to display a list of Oracle Home directories.
SYNTAX
PARAMETER
EXAMPLE
Use the dbcli update-dbhome command to apply the DBBP bundle patch to a database home.
For more information about applying patches, see Patching a DB System.
SYNTAX
PARAMETERS
EXAMPLE
The following commands update the database home and show the output from the update job:
[root@dbsys ~]# dbcli update-dbhome -i e1877dac-a69a-40a1-b65a-d5e190e671e6
{
"jobId" : "493e703b-46ef-4a3f-909d-bbd123469bea",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "January 19, 2017 10:03:21 AM PST",
"resourceList" : [ ],
"description" : "DB Home Patching: Home Id is e1877dac-a69a-40a1-b65a-d5e190e671e6",
"updatedTime" : "January 19, 2017 10:03:21 AM PST"
}
Job details
----------------------------------------------------------------
ID: 493e703b-46ef-4a3f-909d-bbd123469bea
Description: DB Home Patching: Home Id is e1877dac-a69a-40a1-b65a-d5e190e671e6
Status: Running
Created: January 19, 2017 10:03:21 AM PST
Message:
Dbstorage Commands
- dbcli list-dbstorages
- dbcli describe-dbstorage
- dbcli create-dbstorage
- dbcli delete-dbstorage
Use the dbcli list-dbstorages command to list the database storage in the DB System.
SYNTAX
PARAMETERS
EXAMPLE
Use the dbcli describe-dbstorage command to show detailed information about a specific
database storage resource.
SYNTAX
PARAMETERS
-i --id    Defines the database storage ID. Use the dbcli list-dbstorages command to get the database storage ID.
EXAMPLE
The following command displays the database storage details for 105a2db2-625a-45ba-8bdd-ee46da0fd83a:
[root@dbsys ~]# dbcli describe-dbstorage -i 105a2db2-625a-45ba-8bdd-ee46da0fd83a
DBStorage details
----------------------------------------------------------------
ID: 105a2db2-625a-45ba-8bdd-ee46da0fd83a
DB Name: db1
DBUnique Name: db1
DB Resource ID: 439e7bd7-f717-447a-8046-08b5f6493df0
Storage Type:
DATA Location: /u02/app/oracle/oradata/db1
RECO Location: /u03/app/oracle/fast_recovery_area/
REDO Location: /u03/app/oracle/redo/
State: ResourceState(status=Configured)
Created: July 3, 2016 4:19:21 AM UTC
UpdatedTime: July 3, 2016 4:41:29 AM UTC
Use the dbcli create-dbstorage command to create the database storage layout without
creating the complete database. This is useful for database migration and standby database
creation.
SYNTAX
PARAMETERS
EXAMPLE
The following command creates database storage with a storage type of ACFS:
[root@dbsys ~]# dbcli create-dbstorage -r ACFS -n testdb -u testdbname
{
"jobId" : "5884a77a-0577-414f-8c36-1e9d8a1e9cee",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467952215102,
"description" : "Database storage service creation with db name: testdb",
"updatedTime" : 1467952215103
}
Use the dbcli delete-dbstorage command to delete database storage that is not being used
by a database. An error occurs if the resource is in use.
SYNTAX
PARAMETERS
-i --id    The database storage ID to delete. Use the dbcli list-dbstorages command to get the database storage ID.
EXAMPLE
{
"jobId" : "467c9388-18c6-4e1a-8655-2fd3603856ef",
"status" : "Running",
"message" : null,
"reports" : [ ],
"createTimestamp" : 1467952336843,
"description" : "Database storage service deletion with id: f444dd87-86c9-4969-a72c-fb2026e7384b",
"updatedTime" : 1467952336856
}
Dbsystem Command
Use the dbcli describe-dbsystem command to display details about the DB System. On a 2-
node RAC DB System, the command provides information about the local node.
SYNTAX
PARAMETERS
EXAMPLE
Appliance Information
----------------------------------------------------------------
ID: a083a81b-d204-4aae-a891-31eccaa92be6
Platform: BmIaaSSi
Data Disk Count: 4
CPU Core Count: 36
Created: September 2, 2016 4:03:47 PM UTC
System Information
----------------------------------------------------------------
Name: wdcxgd5a
Domain Name: asdfasdfasdf.asdfdfasdfasdfasdf.fasdisdfkkfasd.com
Time Zone: UTC
DB Edition: SE
DNS Servers:
NTP Servers:
DcsCli Details
----------------------------------------------------------------
Implementation-Version: jenkins-dcs-cli-350
Archiver-Version: Plexus Archiver
Built-By: aime
Created-By: Apache Maven 3.3.9
Build-Jdk: 1.8.0_92
Implementation-Id: b5413850b54fdf330231c0ae4b761fa4c364c5bc
Manifest-Version: 1.0
Main-Class: com.oracle.oda.dcscli.DcsCli
DcsAgent Details
----------------------------------------------------------------
Version: 1.0-SNAPSHOT
BuildNumber: jenkins-dcs-agent-426
GitNumber: 366ec7fd136670781ea5e8345cc2f5272474deef
BuildTime: 2016-09-02_1627 UTC
Job Commands
- dbcli describe-job
- dbcli list-jobs
Use the dbcli describe-job command to display details about a specific job.
SYNTAX
PARAMETERS
-i --jobid    Identifies the job. Use the dbcli list-jobs command to get the jobid.
EXAMPLE
The following command displays details about the specified job ID:
[root@dbsys ~]# dbcli describe-job -i 74731897-fb6b-4379-9a37-246912025c17
Job details
----------------------------------------------------------------
ID: 74731897-fb6b-4379-9a37-246912025c17
Description: Backup service creation with db name: dbtst
Status: Success
Created: November 18, 2016 8:33:04 PM UTC
Message:
Use the dbcli list-jobs command to display a list of jobs, including each job's ID,
description, creation time, and status.
SYNTAX
PARAMETERS
EXAMPLE
ID                                     Description                                                           Created                             Status
-------------------------------------- --------------------------------------------------------------------- ----------------------------------- ----------
0a362dac-0339-41b5-9c9c-4d229e363eaa   Database service creation with db name: db11                          November 10, 2016 11:37:54 AM UTC   Success
9157cc78-b487-4ee9-9f46-0159f10236e4   Database service creation with db name: jhfpdb                        November 17, 2016 7:19:59 PM UTC    Success
013c408d-37ca-4f58-a053-02d4efdc42d0   create backup config:myBackupConfig                                   November 18, 2016 8:28:14 PM UTC    Success
921a54e3-c359-4aea-9efc-6ae7346cb0c2   update database id:80ad855a-5145-4f8f-a08f-406c5e4684ff               November 18, 2016 8:32:16 PM UTC    Success
74731897-fb6b-4379-9a37-246912025c17   Backup service creation with db name: dbtst                           November 18, 2016 8:33:04 PM UTC    Success
40a227b1-8c47-46b9-a116-48cc1476fc12   Creating a report for database 80ad855a-5145-4f8f-a08f-406c5e4684ff   November 18, 2016 8:41:39 PM UTC    Success
Latestpatch Command
Note
Use the dbcli describe-latestpatch command to show the latest patches applicable to the
DB System and available in Oracle Cloud Infrastructure Object Storage.
This command requires a valid Object Storage credentials configuration. Use the dbcli create-
bmccredential command to create the configuration if you haven't already done so. If the
configuration is missing or invalid, the command fails with the error: Failed to connect to
the object store. Please provide valid details.
For more information about updating the CLI, creating the credentials configuration, and
applying patches, see Patching a DB System.
SYNTAX
PARAMETERS
EXAMPLE
componentType availableVersion
--------------- --------------------
gi 12.1.0.2.161018
db 11.2.0.4.161018
db 12.1.0.2.161018
oak 12.1.2.10.0
Netsecurity Commands
The following commands are available to manage network encryption on the DB System:
- dbcli describe-netsecurity
- dbcli update-netsecurity
Use the dbcli describe-netsecurity command to display the current network encryption
setting for a database home.
SYNTAX
PARAMETERS
EXAMPLE
The following command displays the encryption setting for the specified database home:
[root@dbsys ~]# dbcli describe-netsecurity -H 16c96a9c-f579-4a4c-a645-8d4d22d6889d
NetSecurity Rules
----------------------------------------------------------------
DatabaseHomeID: 16c96a9c-f579-4a4c-a645-8d4d22d6889d
Role: Server
EncryptionAlgorithms: AES256 AES192 AES128
IntegrityAlgorithms: SHA1
ConnectionType: Required
Role: Client
EncryptionAlgorithms: AES256 AES192 AES128
IntegrityAlgorithms: SHA1
ConnectionType: Required
Use the dbcli update-netsecurity command to update the Oracle Net security
configuration on the DB System.
SYNTAX
PARAMETERS
-H --dbHomeId    Defines the database home ID. Use the dbcli list-dbhomes command to get the dbHomeId.
EXAMPLE
Objectstoreswift Commands
You can back up a database to an existing bucket in the Oracle Cloud Infrastructure Object
Storage service by using the dbcli create-backup command, but first you'll need to:
1. Create an object store on the DB System, which contains the endpoint and credentials to
access Object Storage, by using the dbcli create-objectstoreswift command.
2. Create a backup configuration that refers to the object store ID and the bucket name by
using the dbcli create-backupconfig command.
3. Associate the backup configuration to the database by using the dbcli update-database
command.
- dbcli create-objectstoreswift
- dbcli describe-objectstoreswift
- dbcli list-objectstoreswifts
Note
SYNTAX
PARAMETERS
https://swiftobjectstorage.<region>.oraclecloud.com/v1
-u --username    The user name for the Oracle Cloud Infrastructure user account, for example:
EXAMPLE
The following command creates an object store and prompts for the Swift password:
[root@dbsys ~]# dbcli create-objectstoreswift -n r2swift -t CompanyABC -u [email protected] -e
https://swiftobjectstorage.<region>.oraclecloud.com/v1 -p
Password for Swift:
{
"jobId" : "c565bb71-f67b-4fab-9d6f-a34eae36feb7",
"status" : "Created",
"message" : "Create object store swift",
"reports" : [ ],
"createTimestamp" : "January 19, 2017 11:11:33 AM PST",
"resourceList" : [ {
"resourceId" : "8a0fe039-f5d4-426a-8707-256c612b3a30",
"resourceType" : "ObjectStoreSwift",
"jobId" : "c565bb71-f67b-4fab-9d6f-a34eae36feb7",
"updatedTime" : "January 19, 2017 11:11:33 AM PST"
} ],
"description" : "create object store:biyanr2swift",
"updatedTime" : "January 19, 2017 11:11:33 AM PST"
}
SYNTAX
PARAMETERS
EXAMPLE
Use the dbcli list-objectstoreswifts command to list the object stores on a DB System.
SYNTAX
PARAMETERS
EXAMPLE
Recovery Commands
The following commands are available to initiate a database recovery and list recovery jobs:
- dbcli create-recovery
- dbcli list-recovery
SYNTAX
PARAMETERS
-r --recoveryTimeStamp    Defines the time for the end point of the recovery. The format is MM/DD/YYYY HH24:MI:SS, for example, 08/09/2016 05:12:15.
-s --scn    Defines the system change number (SCN) for the end point of the recovery, when the specified recovery type is SCN.
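A value in the MM/DD/YYYY HH24:MI:SS format expected by -r can be produced with date. The epoch value below is hypothetical, chosen only to make the output deterministic:

```shell
# Format an arbitrary point in time as MM/DD/YYYY HH24:MI:SS (GNU date).
RECOVERY_TS=$(date -u -d "@1470719535" "+%m/%d/%Y %H:%M:%S")
echo "$RECOVERY_TS"    # → 08/09/2016 05:12:15
```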
EXAMPLE
Use the dbcli list-recovery command to see information about recovery jobs.
SYNTAX
PARAMETERS
EXAMPLE
Schedule Commands
You can run backup jobs automatically by using backup schedules. A default backup schedule
is automatically created for every database that is associated with a backup configuration.
The backup configuration controls the backup destination and recovery window. The schedule
uses a cron expression to control when and how often the backup job runs. The default cron
expression is 0 2 4 1/1 * ? *, so the backup job is scheduled to run daily at 4:02 AM. You
can update the cron expression as needed by using the dbcli update-schedule command. You
can use a utility such as CronMaker to help build expressions. For more
information, see http://www.cronmaker.com.
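The default expression uses the Quartz cron layout, which has seven fields (seconds through year). This sketch simply splits the expression to show which field is which:

```shell
# Decode the default Quartz cron expression field by field.
CRON='0 2 4 1/1 * ? *'
set -f            # keep '*' and '?' literal (no filename globbing)
set -- $CRON
SEC=$1; MIN=$2; HOUR=$3; DOM=$4; MON=$5; DOW=$6; YEAR=$7
set +f
echo "seconds=$SEC minutes=$MIN hours=$HOUR day-of-month=$DOM month=$MON day-of-week=$DOW year=$YEAR"
```

Reading the fields back, the job fires at 04:02:00 every day.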
Note
- dbcli list-schedules
- dbcli describe-schedule
- dbcli update-schedule
- dbcli list-scheduledExecutions
SYNTAX
PARAMETERS
EXAMPLE
ID    Name    Description    CronExpression
SYNTAX
PARAMETERS
EXAMPLE
Use the dbcli update-schedule command to enable or disable a schedule, or update the
cron expression that controls when and how frequently the job is scheduled. You can use the
CronMaker utility to help build cron expressions. For more information, see
http://www.cronmaker.com.
SYNTAX
PARAMETERS
-e --enable
EXAMPLE
The following command updates the backup schedule to run daily at 12:00 PM (noon) by
updating the cron expression to '0 0 12 1/1 * ? *'.
[root@dbsys ~]# dbcli update-schedule -i 481856f9-2cdd-45f8-b0b0-11cc8c48970a -x '0 0 12 1/1 * ? *'
You can use the dbcli describe-schedule command to get more information about the
update.
Use the dbcli list-scheduledExecutions command to list the jobs that have been
executed by existing schedules.
SYNTAX
PARAMETERS
EXAMPLE
The following command lists all the jobs executed for all schedules:
[root@dbsys ~]# dbcli list-scheduledExecutions
ID                                     ScheduledId                            JobId                                  Status    Executed Time
-------------------------------------- -------------------------------------- -------------------------------------- --------- ---------------------------------
23691445-9120-4a34-9bcd-dcbd382dc455   22fbd334-ebbd-40f4-af79-9eb2360cfb77   c7f32a6c-f835-45f8-a5ef-ff04473d3a73   Executed  September 5, 2016 8:52:00 PM UTC
54071c4e-98db-41c8-92b6-b5f92ca11631   a7623340-ece7-4618-a7d9-8efb353378b4   b663fb73-e531-43bf-9939-0b404b62e8ce   Executed  September 6, 2016 12:00:00 AM UTC
Note the JobId in the command output. You can use the dbcli describe-job command with the
JobId to get more information about the job.
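For example, a JobId from the output above can be fed straight into describe-job; this sketch only assembles the command, since running it requires a DB System:

```shell
# Take a JobId from the list-scheduledExecutions output and build the
# follow-up command (sketch; run it on the DB System itself).
jobid='c7f32a6c-f835-45f8-a5ef-ff04473d3a73'
cmd="dbcli describe-job -i $jobid"
echo "$cmd"
```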
Server Command
Use the dbcli update-server command to apply patches to the server components in the
DB System. For more information about applying patches, see Patching a DB System.
SYNTAX
PARAMETERS
EXAMPLE
The following commands update the server and show the output from the update job:
[root@dbsys ~]# dbcli update-server
{
"jobId" : "9a02d111-e902-4e94-bc6b-9b820ddf6ed8",
"status" : "Created",
"reports" : [ ],
"createTimestamp" : "January 19, 2017 09:37:11 AM PST",
"resourceList" : [ ],
"description" : "Server Patching",
"updatedTime" : "January 19, 2017 09:37:11 AM PST"
}
Job details
----------------------------------------------------------------
ID: 9a02d111-e902-4e94-bc6b-9b820ddf6ed8
Description: Server Patching
Status: Running
Created: January 19, 2017 9:37:11 AM PST
Message:
TDE Command
Use the dbcli update-tdekey command to update the TDE encryption key inside the TDE
wallet. You can update the encryption key for pluggable databases (if -pdbNames is
specified) and/or the container database (if -rootDatabase is specified).
SYNTAX
PARAMETERS
-t -tagName Defines the TagName used to back up the wallet. The default is OdaRotateKey.
EXAMPLE
The following command updates the key for pdb1 and pdb2 only:
[root@dbsys ~]# dbcli update-tdekey -dbid ee3eaab6-a45b-4e61-a218-c4ba665503d9 -p -n pdb1,pdb2
The following command updates pdb1, pdb2, and the container database:
[root@dbsys ~]# dbcli update-tdekey -dbid ee3eaab6-a45b-4e61-a218-c4ba665503d9 -p -n pdb1,pdb2 -r
Admin Commands
Use the dbadmcli manage diagcollect command to collect diagnostic information about a
DB System for troubleshooting purposes, and for working with Oracle Support Services.
SYNTAX
PARAMETERS
Parameter Description
EXAMPLE
DBADMCLI POWER
SYNTAX
PARAMETERS
Parameter Description
name Defines the disk resource name. The resource name format is pd_[0..3]. Use
the dbadmcli show disk command to get the disk resource name.
Use the dbadmcli power disk status command to display the current power status of a
disk.
SYNTAX
PARAMETERS
Parameter Description
name Identifies a specific disk resource name. The resource name format is pd_[0..3]. For example, pd_01.
EXAMPLE
Use the dbadmcli show controller command to display details of the controller.
SYNTAX
PARAMETER
Parameter Description
controller_id The ID number of the controller. Use the dbadmcli show storage command to get the ID.
Use the dbadmcli show disk command to display the status of a single disk or all disks on
the DB System.
SYNTAX
PARAMETERS
Parameter Description
-getlog (Optional) Displays all the SMART log entries for an NVMe disk.
name (Optional) Identifies a specific disk resource name. The resource name
format is pd_[0..3]. If omitted, the command displays information about all
disks on the system.
EXAMPLES
Initialized : 0
IsConfigDepende : false
ModelNum : MS1PC2DD3ORA3.2T
MonitorFlag : 1
MultiPathList : |/dev/nvme2n1|
Name : pd_00
NewPartAddr : 0
OSUserType : |userType:Multiuser|
PlatformName : X5_2_LITE_IAAS
PrevState : Invalid
PrevUsrDevName :
SectorSize : 512
SerialNum : S2LHNAAH502855
Size : 3200631791616
SlotNum : 0
SmartDiskWarnin : 0
SmartTemperatur : 32
State : Online
StateChangeTs : 1467176081
StateDetails : Good
TotalSectors : 6251233968
TypeName : 0
UsrDevName : NVD_S00_S2LHNAAH502855
VendorName : Samsung
gid : 0
mode : 660
uid : 0
Use the dbadmcli show diskgroup command to list configured diskgroups or display a
specific diskgroup configuration.
SYNTAX
PARAMETERS
Parameter Description
EXAMPLES
DiskGroups
----------
DATA
RECO
Use the dbadmcli show env_hw command to display the environment type and hardware
version of the current DB System.
SYNTAX
PARAMETER
Parameter Description
DBADMCLI SHOW FS
Use the dbadmcli show fs command to display file system details.
SYNTAX
PARAMETER
Parameter Description
Use the dbadmcli show storage command to show the storage controllers, expanders, and
disks.
SYNTAX
PARAMETERS
Parameter Description
EXAMPLE
Id = 0
Pci Slot = 13
Serial Num = S2LHNAAH504431
Vendor = Samsung
Model = MS1PC2DD3ORA3.2T
FwVers = KPYA8R3Q
strId = nvme:25:00.00
Pci Address = 25:00.0
Id = 1
Pci Slot = 12
Serial Num = S2LHNAAH505449
Vendor = Samsung
Model = MS1PC2DD3ORA3.2T
FwVers = KPYA8R3Q
strId = nvme:27:00.00
Pci Address = 27:00.0
Id = 2
Pci Slot = 10
Serial Num = S2LHNAAH503573
Vendor = Samsung
Model = MS1PC2DD3ORA3.2T
FwVers = KPYA8R3Q
strId = nvme:29:00.00
Pci Address = 29:00.0
Id = 3
Pci Slot = 11
Serial Num = S2LHNAAH503538
Vendor = Samsung
Model = MS1PC2DD3ORA3.2T
FwVers = KPYA8R3Q
strId = nvme:2b:00.00
Pci Address = 2b:00.0
DBADMCLI STORDIAG
Use the dbadmcli stordiag command to collect detailed information for each disk or NVM
Express (NVMe) device.
SYNTAX
PARAMETERS
Parameter Description
name Defines the disk resource name. The resource name format is pd_[0..3].
EXAMPLE
l Use the OLTP templates if your database workload is primarily online transaction
processing (OLTP).
l Use the DSS templates if your database workload is primarily decision support (DSS) or
data warehousing.
l Use the in-memory (IMDB) templates if your database workload can fit in memory, and
can benefit from in-memory performance capabilities.
The following tables describe the templates for each type of workload.
Template  CPU Cores  SGA (GB)  PGA (GB)  Flash (GB)  Processes  Redo Log File Size (GB)  Log Buffer (MB)
odb1s     1          2         1         6           200        1                        16
odb1      1          4         2         12          200        1                        16
odb2      2          8         4         24          400        1                        16
odb4      4          16        8         48          800        1                        32
odb6      6          24        12        72          1200       2                        64
Template  CPU Cores  SGA (GB)  PGA (GB)  Processes  Redo Log File Size (GB)  Log Buffer (MB)
odb1s     1          1         2         200        1                        16
odb1      1          2         4         200        1                        16
odb2      2          4         8         400        1                        16
odb4      4          8         16        800        1                        32
odb6      6          12        24        1200       2                        64
odb8      8          16        32        1600       2                        64
odb10     10         20        40        2000       2                        64
odb12     12         24        48        2400       4                        64
odb16     16         32        64        3200       4                        64
odb20     20         40        80        4000       4                        64
odb24     24         48        96        4800       4                        64
Template  CPU Cores  SGA (GB)  PGA (GB)  In-Memory (GB)  Processes  Redo Log File Size (GB)  Log Buffer (MB)
odb1s     1          2         1         1               200        1                        16
odb1      1          4         2         2               200        1                        16
odb2      2          8         4         4               400        1                        16
odb4      4          16        8         8               800        1                        32
odb6      6          24        12        12              1200       2                        64
odb8      8          32        16        16              1600       2                        64
odb10     10         40        20        20              2000       2                        64
odb12     12         48        24        24              2400       4                        64
odb16     16         64        32        32              3200       4                        64
odb20     20         80        40        40              4000       4                        64
odb24     24         96        48        48              4800       4                        64
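As an illustration only (this helper is not part of dbcli), the core counts in the DSS table above can drive a lookup for the smallest template that covers a requested number of CPU cores:

```shell
# Pick the smallest DSS template with at least the requested CPU cores.
# Core counts come from the DSS table above; odb1s (also 1 core, smaller
# SGA) is omitted for simplicity. Illustration only.
pick_template() {
  for entry in 1:odb1 2:odb2 4:odb4 6:odb6 8:odb8 10:odb10 \
               12:odb12 16:odb16 20:odb20 24:odb24; do
    if [ "$1" -le "${entry%%:*}" ]; then
      echo "${entry#*:}"
      return 0
    fi
  done
  echo "no single template covers $1 cores" >&2
  return 1
}
pick_template 5    # prints odb6
```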
including the version, character set, and platform endian format of the source and target
databases.
Several characteristics and factors affect which migration method is best for a given
scenario. To determine which migration methods are applicable to your migration scenario,
gather the following information.
2. For on-premises Oracle Database 12c Release 2 and Oracle Database 12c Release 1
databases, the architecture of the database:
l Multitenant container database (CDB)
l Non-CDB
3. Endian format (byte ordering) of your on-premises database’s host platform
Some platforms are little endian and others are big endian. Query V$TRANSPORTABLE_PLATFORM
to identify the endian format, and to determine whether cross-platform tablespace
transport is supported.
The Oracle Cloud Infrastructure Database service uses the Linux platform, which is little endian.
4. Database character set of your on-premises database and the Oracle Cloud
Infrastructure Database database.
Some migration methods require that the source and target databases use compatible
database character sets.
5. Database version of the Oracle Cloud Infrastructure Database database you are
migrating to
l Oracle Database 12c Release 2
l Oracle Database 12c Release 1
l Oracle Database 11g Release 2
Oracle Database 12c Release 2 and Oracle Database 12c Release 1 databases created
on the Database service use CDB architecture. Databases created using the Enterprise
Edition software edition are single-tenant, and databases created using the High
Performance or Extreme Performance software editions are multitenant.
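The endian-format check in step 3 above can be expressed as a single query; the sketch below only assembles the statement, since running it requires access to the on-premises database:

```shell
# The endian-format check from step 3, as a SQL*Plus statement (sketch).
# Here we only assemble the SQL; the connect string in the comment below
# is a placeholder.
sql="SELECT t.platform_name, t.endian_format
FROM v\$transportable_platform t, v\$database d
WHERE t.platform_name = d.platform_name;"
echo "$sql"
# run with, e.g.:  echo "$sql" | sqlplus -s system@onprem
```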
After gathering this information, use the “source” and “destination” database versions as your
guide to see which migration methods apply to your migration scenario:
l Migrating from Oracle Database 11g to Oracle Database 11g in the Cloud
l Migrating from Oracle Database 11g to Oracle Database 12c in the Cloud
l Migrating from Oracle Database 12c CDB to Oracle Database 12c in the Cloud
l Migrating from Oracle Database 12c Non-CDB to Oracle Database 12c in the Cloud
1. FastConnect: Provides a secure connection between your existing network and your
virtual cloud network (VCN) over a private physical network instead of the internet. For
more information, see FastConnect Overview.
2. IPSec VPN: Provides a secure connection between a dynamic routing gateway (DRG)
and customer-premises equipment (CPE), consisting of multiple IPSec tunnels. The IPSec
connection is one of the components forming a site-to-site VPN between a VCN and your
on-premises network. For more information, see IPSec VPNs.
3. Internet gateway: Provides a path for network traffic between your VCN and the
internet. For more information, see Connectivity to the Internet.
Migration Methods
Many methods exist to migrate Oracle databases to the Oracle Cloud Infrastructure
Database service. Which of these methods apply to a given migration scenario depends on
several factors, including the version, character set, and platform endian format of the
source and target databases.
The applicability of some of the migration methods depends on the on-premises database’s
character set and platform endian format.
If you have not already done so, determine the database character set of your on-premises
database, and determine the endian format of the platform your on-premises database
resides on. Use this information to help you choose an appropriate method.
For the steps this method entails, see RMAN Transportable Tablespace with Data Pump.
l RMAN CONVERT Transportable Tablespace with Data Pump
This method can be used only if the database character sets of your on-premises
database and the Oracle Cloud Infrastructure Database database are compatible.
This method is similar to the Data Pump Transportable Tablespace method, with the
addition of the RMAN CONVERT command to enable transport between platforms with
different endianness. Query V$TRANSPORTABLE_PLATFORM to determine if the on-
premises database platform supports cross-platform tablespace transport and to
determine the endian format of the platform. The Database service platform is little-
endian format.
For the steps this method entails, see RMAN CONVERT Transportable Tablespace with
Data Pump.
Migrating from Oracle Database 11g to Oracle Database 12c in the Cloud
You can migrate Oracle Database 11g databases from on-premises to Oracle Database 12c
databases in the Database service using several different methods.
The applicability of some of the migration methods depends on the on-premises database’s
version, database character set, and platform endian format.
If you have not already done so, determine the database version and database character set
of your on-premises database, and determine the endian format of the platform your on-
premises database resides on. Use this information to help you choose an appropriate
method.
This method can be used only if the on-premises platform is little endian, and the
database character sets of your on-premises database and the Database service
database are compatible.
For the steps this method entails, see Data Pump Transportable Tablespace.
l RMAN Transportable Tablespace with Data Pump
This method can be used only if the on-premises platform is little endian, and the
database character sets of your on-premises database and the Database service
database are compatible.
For the steps this method entails, see RMAN Transportable Tablespace with Data Pump.
l RMAN CONVERT Transportable Tablespace with Data Pump
This method can be used only if the database character sets of your on-premises
database and the Database service database are compatible.
This method is similar to the Data Pump Transportable Tablespace method, with the
addition of the RMAN CONVERT command to enable transport between platforms with
different endianness. Query V$TRANSPORTABLE_PLATFORM to determine if the on-
premises database platform supports cross-platform tablespace transport and to
determine the endian format of the platform. The Database service platform is little-
endian format.
For the steps this method entails, see RMAN CONVERT Transportable Tablespace with
Data Pump.
l Data Pump Full Transportable
This method can be used only if the source database release version is 11.2.0.3 or later,
and the database character sets of your on-premises database and the Database service
database are compatible.
For the steps this method entails, see Data Pump Full Transportable.
Migrating from Oracle Database 12c CDB to Oracle Database 12c in the
Cloud
You can migrate Oracle Database 12c CDB databases from on-premises to Oracle Database
12c databases in the Oracle Cloud Infrastructure Database service using several different
methods.
The applicability of some of the migration methods depends on the on-premises database’s
character set and platform endian format.
If you have not already done so, determine the database character set of your on-premises
database, and determine the endian format of the platform your on-premises database
resides on. Use this information to help you choose an appropriate method.
For the steps this method entails, see SQL Developer and SQL*Loader to Migrate
Selected Objects.
l SQL Developer and INSERT Statements to Migrate Selected Objects
You can use SQL Developer to create a cart into which you add selected objects to be
loaded into your Oracle Database 12c database on the cloud. In this method, you use
SQL INSERT statements to load the data into your cloud database.
For the steps this method entails, see SQL Developer and INSERT Statements to Migrate
Selected Objects.
The applicability of some of the migration methods depends on the on-premises database’s
character set and platform endian format.
If you have not already done so, determine the database character set of your on-premises
database, and determine the endian format of the platform your on-premises database
resides on. Use this information to help you choose an appropriate method.
This method can be used only if the on-premises platform is little endian, and the
database character sets of your on-premises database and the Database service
database are compatible.
For the steps this method entails, see RMAN Transportable Tablespace with Data Pump.
l RMAN CONVERT Transportable Tablespace with Data Pump
This method can be used only if the database character sets of your on-premises
database and the Database service database are compatible.
This method is similar to the Data Pump Transportable Tablespace method, with the
addition of the RMAN CONVERT command to enable transport between platforms with
different endianness. Query V$TRANSPORTABLE_PLATFORM to determine if the on-
premises database platform supports cross-platform tablespace transport and to
determine the endian format of the platform. The Database service platform is little-
endian format.
For the steps this method entails, see RMAN CONVERT Transportable Tablespace with
Data Pump.
l RMAN Cross-Platform Transportable Tablespace Backup Sets
This method can be used only if the database character sets of your on-premises
database and the Database service database are compatible.
For the steps this method entails, see RMAN Cross-Platform Transportable Tablespace
Backup Sets.
l Data Pump Full Transportable
This method can be used only if the database character sets of your on-premises
database and the Database service database are compatible.
For the steps this method entails, see Data Pump Full Transportable.
l Unplugging/Plugging (non-CDB)
This method can be used only if the on-premises platform is little endian, and the on-
premises database and Database service database have compatible database character
sets and national character sets.
You can use the unplug/plug method to migrate an Oracle Database 12c non-CDB
database to Oracle Database 12c in the cloud. This method provides a way to
consolidate several non-CDB databases into a single Oracle Database 12c CDB on the
cloud.
For the steps this method entails, see Unplugging/Plugging Non-CDB.
l Remote Cloning (non-CDB)
This method can be used only if the on-premises platform is little endian, the on-
premises database release is 12.1.0.2 or higher, and the on-premises database and
Database service database have compatible database character sets and national
character sets.
You can use the remote cloning method to copy an Oracle Database 12c non-CDB on-
premises database to your Oracle Database 12c database in the cloud.
For the steps this method entails, see Remote Cloning Non-CDB.
l SQL Developer and SQL*Loader to Migrate Selected Objects
You can use SQL Developer to create a cart into which you add selected objects to be
loaded into your Oracle Database 12c database on the cloud. In this method, you use
SQL*Loader to load the data into your cloud database.
For the steps this method entails, see SQL Developer and SQL*Loader to Migrate
Selected Objects.
l SQL Developer and INSERT Statements to Migrate Selected Objects
You can use SQL Developer to create a cart into which you add selected objects to be
loaded into your Oracle Database 12c database on the cloud. In this method, you use
SQL INSERT statements to load the data into your cloud database.
For the steps this method entails, see SQL Developer and INSERT Statements to Migrate
Selected Objects.
1. On the on-premises database host, invoke Data Pump Export and export the on-
premises database.
2. Use a secure copy utility to transfer the dump file to the Database service compute
node.
3. On the Database service compute node, invoke Data Pump Import and import the data
into the database.
4. After verifying that the data has been imported successfully, you can delete the dump
file.
For information about Data Pump Import and Export, see these topics:
l "Data Pump Export Modes" in Oracle Database Utilities for Release 12.2, 12.1 or 11.2.
l "Data Pump Import Modes" in Oracle Database Utilities for Release 12.2, 12.1 or 11.2.
This example illustrates a schema mode export and import. The same general procedure
applies for a full database, tablespace, or table export and import.
1. On the on-premises database host, invoke Data Pump Export to export the schemas.
a. On the on-premises database host, create an operating system directory to use
for the on-premises database export files.
$ mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
b. On the on-premises database host, invoke SQL*Plus and log in to the on-premises
database as the SYSTEM user.
$ sqlplus system
Enter password: <enter the password for the SYSTEM user>
c. Create a directory object in the on-premises database that points to the operating
system directory.
SQL> CREATE DIRECTORY dp_for_cloud AS '/u01/app/oracle/admin/orcl/dpdump/for_cloud';
2. Use a secure copy utility to transfer the dump file to the Database service compute
node.
In this example the dump file is copied to the /u01 directory. Choose the appropriate
location based on the size of the file that will be transferred.
a. On the Database service compute node, create a directory for the dump file.
$ mkdir /u01/app/oracle/admin/ORCL/dpdump/from_onprem
b. Before using the scp command to copy the export dump file, make sure the SSH
private key that provides access to the Database service compute node is
available on your on-premises host.
c. On the on-premises database host, use the SCP utility to transfer the dump file to
the Database service compute node.
$ scp -i private_key_file \
/u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp \
oracle@IP_address_DBaaS_VM:/u01/app/oracle/admin/ORCL/dpdump/from_onprem
3. On the Database service compute node, invoke Data Pump Import and import the data
into the database.
a. On the Database service compute node, invoke SQL*Plus and log in to the
database as the SYSTEM user.
$ sqlplus system
Enter password: <enter the password for the SYSTEM user>
b. Create a directory object in the Database service database.
SQL> CREATE DIRECTORY dp_from_onprem AS '/u01/app/oracle/admin/ORCL/dpdump/from_onprem';
c. If they do not exist, create the tablespace(s) for the objects that will be imported.
d. Exit from SQL*Plus.
e. On the Database service compute node, invoke Data Pump Import and connect to
the database. Import the data into the database.
impdp system SCHEMAS=fsowner DIRECTORY=dp_from_onprem
4. After verifying that the data has been imported successfully, you can delete the
expdat.dmp file.
You can use the Data Pump full transportable method to copy an entire database from your
on-premises host to the database on a Database service database deployment.
To migrate an Oracle Database 11g on-premises database to the Oracle Database 12c
database on a Database service database deployment using the Data Pump full transportable
method, you perform these tasks:
1. On the on-premises database host, prepare the database for the Data Pump full
transportable export by placing the user-defined tablespaces in READ ONLY mode.
2. On the on-premises database host, invoke Data Pump Export to perform the full
transportable export.
3. Use a secure copy utility to transfer the Data Pump Export dump file and the datafiles
for all of the user-defined tablespaces to the Database service compute node.
4. Set the on-premises tablespaces back to READ WRITE.
5. On the Database service compute node, prepare the database for the tablespace import.
6. On the Database service compute node, invoke Data Pump Import and connect to the
database.
7. After verifying that the data has been imported successfully, you can delete the dump
file.
1. On the source database host, prepare the database for the Data Pump full transportable
export.
a. On the source database host, create a directory in the operating system to use for
the source export.
$ mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
b. On the source database host, invoke SQL*Plus and log in to the source database
as the SYSTEM user.
$ sqlplus system
Enter password: <enter the password for the SYSTEM user>
d. Determine the name(s) of the tablespaces and data files that belong to the user-
defined tablespaces by querying DBA_DATA_FILES. These files will also be listed in
the export output.
SQL> SELECT tablespace_name, file_name FROM dba_data_files;
TABLESPACE_NAME FILE_NAME
--------------- --------------------------------------------------
USERS /u01/app/oracle/oradata/orcl/users01.dbf
UNDOTBS1 /u01/app/oracle/oradata/orcl/undotbs01.dbf
SYSAUX /u01/app/oracle/oradata/orcl/sysaux01.dbf
SYSTEM /u01/app/oracle/oradata/orcl/system01.dbf
EXAMPLE /u01/app/oracle/oradata/orcl/example01.dbf
FSDATA /u01/app/oracle/oradata/orcl/fsdata01.dbf
FSINDEX /u01/app/oracle/oradata/orcl/fsindex01.dbf
SQL>
e. On the source database host, set all tablespaces that will be transported (the
transportable set) to READ ONLY mode.
SQL> ALTER TABLESPACE example READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE fsindex READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE fsdata READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE users READ ONLY;
Tablespace altered.
SQL>
3. Use a secure copy utility to transfer the Data Pump Export dump file and the datafiles
for all of the user-defined tablespaces to the Database service compute node.
In this example the dump file is copied to the /u01 directory. Choose the appropriate
location based on the size of the file that will be transferred.
a. On the Database service compute node, create a directory for the dump file.
$ mkdir /u01/app/oracle/admin/ORCL/dpdump/from_source
b. Before using the scp utility to copy files, make sure the SSH private key that
provides access to the Database service compute node is available on your source
host.
c. On the source database host, use the scp utility to transfer the dump file and all
datafiles of the transportable set to the Database service compute node.
$ scp -i private_key_file \
/u01/app/oracle/oradata/orcl/example01.dbf \
oracle@compute_node_IP_address:/u02/app/oracle/oradata/ORCL/PDB2
$ scp -i private_key_file \
/u01/app/oracle/oradata/orcl/fsdata01.dbf \
oracle@compute_node_IP_address:/u02/app/oracle/oradata/ORCL/PDB2
$ scp -i private_key_file \
/u01/app/oracle/oradata/orcl/fsindex01.dbf \
oracle@compute_node_IP_address:/u02/app/oracle/oradata/ORCL/PDB2
$ scp -i private_key_file \
/u01/app/oracle/oradata/orcl/users01.dbf \
oracle@compute_node_IP_address:/u02/app/oracle/oradata/ORCL/PDB2
6. On the Database service compute node, invoke Data Pump Import and connect to the
PDB.
Import the data into the database using the TRANSPORT_DATAFILES option.
$ impdp system@PDB2 FULL=y DIRECTORY=dp_from_source \
TRANSPORT_DATAFILES='/u02/app/oracle/oradata/ORCL/PDB2/example01.dbf',\
'/u02/app/oracle/oradata/ORCL/PDB2/fsdata01.dbf',\
'/u02/app/oracle/oradata/ORCL/PDB2/fsindex01.dbf',\
'/u02/app/oracle/oradata/ORCL/PDB2/users01.dbf'
7. After verifying that the data has been imported successfully, you can delete the
expdat.dmp dump file.
1. On the on-premises database host, prepare the database for the Data Pump
transportable tablespace export.
2. On the on-premises database host, invoke Data Pump Export to perform the
transportable tablespace export.
3. Use a secure copy utility to transfer the Data Pump Export dump file and the tablespace
datafiles to the Database service compute node.
1. On the on-premises database host, prepare the database for the Data Pump
transportable tablespace export.
a. On the on-premises database host, create a directory in the operating system to
use for the on-premises export.
mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
b. On the on-premises database host, invoke SQL*Plus and log in to the on-premises
database as the SYSTEM user.
sqlplus system
Enter password: <enter the password for the SYSTEM user>
d. Determine the name(s) of the datafiles that belong to the FSDATA and FSINDEX
tablespaces by querying DBA_DATA_FILES. These files will also be listed in the
export output.
SQL> SELECT file_name FROM dba_data_files
2 WHERE tablespace_name = 'FSDATA';
FILE_NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/orcl/fsdata01.dbf
SQL> SELECT file_name FROM dba_data_files
2 WHERE tablespace_name = 'FSINDEX';
FILE_NAME
-----------------------------------------------------------------
/u01/app/oracle/oradata/orcl/fsindex01.dbf
e. On the on-premises database host, set all tablespaces that will be transported
(the transportable set) to READ ONLY mode.
SQL> ALTER TABLESPACE fsindex READ ONLY;
Tablespace altered.
SQL> ALTER TABLESPACE fsdata READ ONLY;
Tablespace altered.
3. Use a secure copy utility to transfer the Data Pump Export dump file and the tablespace
datafiles to the Database service compute node.
In this example the dump file is copied to the /u01 directory. Choose the appropriate
location based on the size of the file that will be transferred.
a. On the Database service compute node, create a directory for the dump file.
mkdir /u01/app/oracle/admin/ORCL/dpdump/from_onprem
b. Before using the scp utility to copy files, make sure the SSH private key that
provides access to the Database service compute node is available on your on-
premises host.
c. On the on-premises database host, use the scp utility to transfer the dump file
and all datafiles of the transportable set to the Database service compute node.
scp -i private_key_file \
/u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp \
oracle@IP_address_DBaaS_VM:/u01/app/oracle/admin/ORCL/dpdump/from_onprem
a. On the Database service compute node, invoke SQL*Plus and log in to the
database as the SYSTEM user.
b. Create a directory object in the Database service database.
SQL> CREATE DIRECTORY dp_from_onprem AS '/u01/app/oracle/admin/ORCL/dpdump/from_onprem';
c. If the owners of the objects that will be imported do not exist in the database,
create them before performing the import. The transportable tablespace mode of
import does not create the users.
SQL> CREATE USER fsowner
2 PROFILE default
3 IDENTIFIED BY fspass
4 TEMPORARY TABLESPACE temp
5 ACCOUNT UNLOCK;
6. On the Database service compute node, invoke Data Pump Import and connect to the
database.
Import the data into the database using the TRANSPORT_DATAFILES option.
impdp system DIRECTORY=dp_from_onprem \
TRANSPORT_DATAFILES='/u02/app/oracle/oradata/ORCL/fsdata01.dbf', \
'/u02/app/oracle/oradata/ORCL/fsindex01.dbf'
7. Set the tablespaces on the Database service database to READ WRITE mode.
a. Invoke SQL*Plus and log in as the SYSTEM user.
b. Set the FSDATA and FSINDEX tablespaces to READ WRITE mode.
SQL> ALTER TABLESPACE fsdata READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE fsindex READ WRITE;
Tablespace altered.
You can use the remote cloning method to copy a PDB from your on-premises Oracle
Database 12c database to a PDB in an Oracle Database 12c database on the Database service.
Migration Tasks
To migrate an Oracle Database 12c PDB to a PDB in a Database service database deployment
using the remote cloning method, you perform these tasks:
1. On the on-premises database host, invoke SQL*Plus and close the on-premises PDB and
then reopen it in READ ONLY mode.
2. On the Database service compute node, invoke SQL*Plus and create a database link that
enables a connection to the on-premises database.
3. On the Database service compute node, execute the CREATE PLUGGABLE DATABASE
command to clone the on-premises PDB.
4. On the Database compute node, open the new PDB by executing the ALTER PLUGGABLE
DATABASE OPEN command.
5. Optionally, on the on-premises database host invoke SQL*Plus and set the on-premises
PDB back to READ WRITE mode.
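The SQL behind steps 2 through 4 above can be sketched as follows; the link name, TNS alias, credentials, and PDB names are all placeholders:

```shell
# SQL for steps 2-4 of the remote cloning method (sketch; all names are
# placeholders). Run each statement from SQL*Plus on the Database
# service compute node; here we only assemble the script.
sql="CREATE DATABASE LINK onprem_link
  CONNECT TO system IDENTIFIED BY password
  USING 'onprem_tns_alias';
CREATE PLUGGABLE DATABASE pdb2 FROM pdb1@onprem_link;
ALTER PLUGGABLE DATABASE pdb2 OPEN;"
echo "$sql"
```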
For more information, see "Cloning a Remote PDB or Non-CDB" in Oracle Database
Administrator's Guide for Release 12.2 or 12.1.
You can use the remote cloning method to copy an Oracle Database 12c non-CDB on-premises
database to a PDB in an Oracle Database 12c database on the Database service.
Migration Tasks
1. On the on-premises database host, invoke SQL*Plus and set the on-premises database
to READ ONLY mode.
2. On the Database service compute node, invoke SQL*Plus and create a database link that
enables a connection to the on-premises database.
3. On the Database service compute node, execute the CREATE PLUGGABLE DATABASE
command to clone the on-premises non-CDB database.
4. On the Database service compute node, execute the $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script.
5. On the Database service compute node, open the new PDB by executing the ALTER
PLUGGABLE DATABASE OPEN command.
6. Optionally, on the on-premises database host invoke SQL*Plus and set the on-premises
database back to READ WRITE mode.
For more information, see "Cloning a Remote PDB or Non-CDB" in Oracle Database
Administrator's Guide for Release 12.2 or 12.1.
To migrate an Oracle Database 12c PDB to a PDB in an Oracle Database 12c database on a
Database service deployment using the RMAN cross-platform transportable PDB method, you
perform these tasks:
1. On the on-premises database host, invoke SQL*Plus and close the on-premises PDB.
2. On the on-premises database host, execute the ALTER PLUGGABLE DATABASE UNPLUG
command to generate an XML file containing the list of datafiles that will be plugged in
to the database on the Database service.
For more information, see "Performing Cross-Platform Data Transport in CDBs and PDBs" in
Oracle Database Backup and Recovery User's Guide for Release 12.2 or 12.1.
Note
To migrate Oracle Database 12c on-premises tablespaces to an Oracle Database 12c database
on a Database service deployment using the RMAN cross-platform transportable backup sets
method, you perform these tasks:
1. On the on-premises database host, prepare the database by placing the user-defined
tablespaces that you intend to transport in READ ONLY mode.
2. On the on-premises database host, invoke RMAN and use the BACKUP command with the
TO PLATFORM or FOR TRANSPORT clause and the DATAPUMP clause to create a backup set
for cross-platform transport. See "BACKUP" in Oracle Database Backup and Recovery
Reference for Release 12.2 or 12.1 for more information on the BACKUP command.
3. Use a secure copy utility to transfer the backup sets, including the Data Pump export
dump file, to the Database service compute node.
4. Set the on-premises tablespaces back to READ WRITE.
5. On the Database service compute node, prepare the database by creating the required
schemas.
6. On the Database service compute node, invoke RMAN and use the RESTORE command
with the foreignFileSpec subclause to restore the cross-platform backup.
7. On the Database service compute node, set the tablespaces on the database to READ
WRITE mode.
For more information, see "Overview of Cross-Platform Data Transport Using Backup Sets" in
Oracle Database Backup and Recovery User’s Guide for Release 12.2 or 12.1.
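As a sketch of step 2, the BACKUP command might look like the following. The tablespace names (fsdata, fsindex) and file locations match the detailed example later in this section; adjust them for your environment.

```
RMAN> BACKUP TO PLATFORM 'Linux x86 64-bit'
2>   FORMAT '/u01/app/oracle/admin/orcl/rman_transdest/fs_tbs.bck'
3>   DATAPUMP FORMAT '/u01/app/oracle/admin/orcl/rman_transdest/fs_tbs.dmp'
4>   TABLESPACE fsdata, fsindex;
```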
1. On the on-premises database host, prepare the database by creating a directory for the
export dump file and placing the user-defined tablespaces that you intend to transport in
READ ONLY mode.
a. On the on-premises database host, create a directory in the operating system to
use for the export dump.
mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
b. On the on-premises database host, invoke SQL*Plus and log in to the PDB as the
SYSTEM user.
sqlplus system@pdb_servicename
Enter password: <enter the password for the SYSTEM user>
d. On the on-premises database host, set all tablespaces that will be transported
(the transportable set) to READ ONLY mode.
SQL> ALTER TABLESPACE fsindex READ ONLY;
SQL> ALTER TABLESPACE fsdata READ ONLY;
b. Invoke RMAN and log in as a user that has been granted the SYSDBA or SYSBACKUP
privilege.
rman target username@pdb_servicename
e. Optionally, navigate to the directory you specified in the BACKUP command to view
the files that were created.
cd /u01/app/oracle/admin/orcl/rman_transdest
$ ls
fs_tbs.bck fs_tbs.dmp
3. Use a secure copy utility to transfer the backup set, including the Data Pump export
dump file, to the Database service compute node.
a. On the Database service compute node, create a directory for the backup set and
dump file.
mkdir /tmp/from_onprem
b. Before using the scp command to copy files, make sure the SSH private key that
provides access to the Database service compute node is available on your on-
premises host.
c. On the on-premises database host, use the SCP utility to transfer the backup set
and the dump file to the Database service compute node.
scp -i private_key_file \
/u01/app/oracle/admin/orcl/rman_transdest/fs_tbs.bck \
oracle@IP_address_DBaaS_VM:/tmp/from_onprem
$ scp -i private_key_file \
/u01/app/oracle/admin/orcl/rman_transdest/fs_tbs.dmp \
oracle@IP_address_DBaaS_VM:/tmp/from_onprem
schemas.
a. On the Database service compute node, invoke SQL*Plus and log in to the PDB as
the SYSTEM user.
b. If the owners of the objects that will be imported do not exist in the database,
create them before performing the RESTORE.
SQL> CREATE USER fsowner
2 PROFILE default
3 IDENTIFIED BY fspass
4 TEMPORARY TABLESPACE temp
5 ACCOUNT UNLOCK;
6. On the Database service compute node, invoke RMAN and use the RESTORE command
with the foreignFileSpec subclause to restore the cross-platform backup.
a. Create an operating system directory for the Data Pump Dump file.
mkdir /tmp/from_onprem
b. Invoke RMAN and log in to the PDB as a user that has been granted the SYSDBA or
SYSBACKUP privilege.
rman target username@pdb_servicename
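The restore itself might then be sketched as follows, assuming the backup set and dump file were copied to /tmp/from_onprem earlier; the exact clauses of the foreignFileSpec subclause depend on your file names and locations.

```
RMAN> RESTORE FOREIGN TABLESPACE fsdata, fsindex TO NEW
2>   FROM BACKUPSET '/tmp/from_onprem/fs_tbs.bck'
3>   DUMP FILE FROM BACKUPSET '/tmp/from_onprem/fs_tbs.dmp';
```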
8. After verifying that the data has been imported successfully, you can delete the backup
set files that were transported from the on-premises host.
This method avoids placing the tablespaces in READ ONLY mode, which the Data Pump
Transportable Tablespace method requires.
1. On the on-premises database host, invoke RMAN and create the transportable
tablespace set.
2. Use a secure copy utility to transfer the Data Pump Export dump file and the tablespace
datafiles to the Database service compute node.
3. On the Database service compute node, prepare the database for the tablespace import.
4. On the Database service compute node, invoke Data Pump Import and connect to the
database. Import the data into the database using the TRANSPORT_DATAFILES option.
5. After verifying that the data has been imported successfully, you can delete the dump
file.
1. On the on-premises database host, invoke RMAN and create the transportable
tablespace set.
a. On the on-premises database host, create an operating system directory for the
datafiles.
mkdir /u01/app/oracle/admin/orcl/rman_transdest
b. On the on-premises database host, create an operating system directory for the RMAN
auxiliary instance files.
mkdir /u01/app/oracle/admin/orcl/rman_auxdest
c. Invoke RMAN and log in as the SYSTEM user. Enter the password for the SYSTEM
user when prompted.
rman target system
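The transportable tablespace set itself might then be created with a TRANSPORT TABLESPACE command along the following lines. The directories match the ones created in the previous sub-steps, and dmpfile.dmp matches the dump file copied later in this procedure; the DATAPUMP DIRECTORY value data_pump_dir is an assumption and should name a directory object valid in your database.

```
RMAN> TRANSPORT TABLESPACE fsdata, fsindex
2>   TABLESPACE DESTINATION '/u01/app/oracle/admin/orcl/rman_transdest'
3>   AUXILIARY DESTINATION '/u01/app/oracle/admin/orcl/rman_auxdest'
4>   DATAPUMP DIRECTORY data_pump_dir
5>   DUMP FILE 'dmpfile.dmp';
```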
2. Use a secure copy utility to transfer the Data Pump Export dump file and the tablespace
datafiles to the Database service compute node.
In this example the dump file is copied to the /u01 directory. Choose the appropriate
location based on the size of the file that will be transferred.
a. On the Database service compute node, create a directory for the dump file.
mkdir /u01/app/oracle/admin/ORCL/dpdump/from_onprem
b. Before using the scp command to copy files, make sure the SSH private key that
provides access to the Database service compute node is available on your on-
premises host.
c. On the on-premises database host, use the SCP utility to transfer the dump file
and all datafiles of the transportable set to the Database service compute node.
scp -i private_key_file \
/u01/app/oracle/admin/orcl/rman_transdest/dmpfile.dmp \
oracle@IP_address_DBaaS_VM:/u01/app/oracle/admin/ORCL/dpdump/from_onprem
$ scp -i private_key_file \
/u01/app/oracle/admin/orcl/rman_transdest/fsdata01.dbf \
oracle@IP_address_DBaaS_VM:/u02/app/oracle/oradata/ORCL
$ scp -i private_key_file \
/u01/app/oracle/admin/orcl/rman_transdest/fsindex01.dbf \
oracle@IP_address_DBaaS_VM:/u02/app/oracle/oradata/ORCL
3. On the Database service compute node, prepare the database for the tablespace import.
a. On the Database service compute node, invoke SQL*Plus and log in to the
database as the SYSTEM user.
b. Create a directory object in the Database service database.
SQL> CREATE DIRECTORY dp_from_onprem AS '/u01/app/oracle/admin/ORCL/dpdump/from_onprem';
c. If the owners of the objects that will be imported do not exist in the database,
create them before performing the import. The transportable tablespace mode of
import does not create the users.
SQL> CREATE USER fsowner
2 PROFILE default
3 IDENTIFIED BY fspass
4 TEMPORARY TABLESPACE temp
5 ACCOUNT UNLOCK;
4. On the Database service compute node, invoke Data Pump Import and connect to the
database.
Import the data into the database using the TRANSPORT_DATAFILES option.
impdp system DIRECTORY=dp_from_onprem DUMPFILE='dmpfile.dmp' \
TRANSPORT_DATAFILES='/u02/app/oracle/oradata/ORCL/fsdata01.dbf', \
'/u02/app/oracle/oradata/ORCL/fsindex01.dbf'
5. After verifying that the data has been imported successfully, you can delete the
dmpfile.dmp dump file.
This method is similar to the Data Pump Transportable Tablespace method, with the addition
of the RMAN CONVERT command to enable transport between platforms with different
endianness. Query V$TRANSPORTABLE_PLATFORM to determine if the on-premises database
platform supports cross-platform tablespace transport and to determine the endian format of
the platform. The Database service platform is little-endian format.
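For example, the following query reports the endian format of the current platform:

```sql
SELECT tp.platform_name, tp.endian_format
FROM   v$transportable_platform tp
JOIN   v$database d
ON     tp.platform_name = d.platform_name;
```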
1. On the on-premises database host, prepare the database for the Data Pump
transportable tablespace export.
2. On the on-premises database host, invoke Data Pump Export to perform the
transportable tablespace export.
3. On the on-premises database host, invoke RMAN and use the CONVERT TABLESPACE
command to convert the tablespace datafile to the Database service platform format.
Refer to the Oracle Database Backup and Recovery Reference for more information on
the CONVERT command.
4. Use a secure copy utility to transfer the Data Pump Export dump file and the converted
tablespace datafiles to the Database service compute node.
5. Set the on-premises tablespaces back to READ WRITE.
6. On the Database service compute node, prepare the database for the tablespace import.
7. On the Database service compute node, invoke Data Pump Import and connect to the
database.
8. On the Database service compute node, set the tablespaces in the database to READ
WRITE mode.
9. After verifying that the data has been imported successfully, you can delete the dump
file.
1. On the on-premises database host, prepare the database for the Data Pump
transportable tablespace export.
a. On the on-premises database host, create a directory in the operating system to
use for the on-premises export.
mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud
b. On the on-premises database host, invoke SQL*Plus and log in to the on-premises
database as the SYSTEM user.
sqlplus system
Enter password: <enter the password for the SYSTEM user>
d. On the on-premises database host, set all tablespaces that will be transported
(the transportable set) to READ ONLY mode.
SQL> ALTER TABLESPACE fsindex READ ONLY;
Tablespace altered.
3. On the on-premises database host, invoke RMAN and use the CONVERT TABLESPACE
command to convert the tablespace datafile to the Database service platform format.
a. Invoke RMAN.
rman target /
b. Execute the RMAN CONVERT TABLESPACE command to convert the datafiles and
store the converted files in a temporary location on the on-premises database
host.
RMAN> CONVERT TABLESPACE fsdata, fsindex
2> TO PLATFORM 'Linux x86 64-bit'
3> FORMAT '/tmp/%U';
…
input datafile file number=00006 name=/u01/app/oracle/oradata/orcl/fsdata01.dbf
converted datafile=/tmp/data_D-ORCL_I-1410251631_TS-FSDATA_FNO-6_0aqc9un3
…
input datafile file number=00007 name=/u01/app/oracle/oradata/orcl/fsindex01.dbf
converted datafile=/tmp/data_D-ORCL_I-1410251631_TS-FSINDEX_FNO-7_0bqc9un6
…
c. Take note of the names of the converted files. You will copy these files to the
Database service compute node in the next step.
d. Exit RMAN.
4. Use a secure copy utility to transfer the Data Pump Export dump file and the converted
tablespace datafiles to the Database service compute node.
In this example the dump file is copied to the /u01 directory. Choose the appropriate
location based on the size of the file that will be transferred.
a. On the Database service compute node, create a directory for the dump file.
mkdir /u01/app/oracle/admin/ORCL/dpdump/from_onprem
b. Before using the scp command to copy files, make sure the SSH private key that
provides access to the Database service compute node is available on your on-
premises host.
c. On the on-premises database host, use the scp utility to transfer the dump file
and all data files of the transportable set to the Database service compute node.
scp -i private_key_file \
/u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp \
oracle@IP_address_DBaaS_VM:/u01/app/oracle/admin/ORCL/dpdump/from_onprem
$ scp -i private_key_file \
/tmp/data_D-ORCL_I-1410251631_TS-FSDATA_FNO-6_0aqc9un3 \
oracle@IP_address_DBaaS_VM:/u02/app/oracle/oradata/ORCL/fsdata01.dbf
$ scp -i private_key_file \
/tmp/data_D-ORCL_I-1410251631_TS-FSINDEX_FNO-7_0bqc9un6 \
oracle@IP_address_DBaaS_VM:/u02/app/oracle/oradata/ORCL/fsindex01.dbf
a. On the Database service compute node, invoke SQL*Plus and log in to the
database as the SYSTEM user.
b. Create a directory object in the Database service database.
SQL> CREATE DIRECTORY dp_from_onprem AS '/u01/app/oracle/admin/ORCL/dpdump/from_onprem';
c. If the owners of the objects that will be imported do not exist in the database,
create them before performing the import. The transportable tablespace mode of
import does not create the users.
SQL> CREATE USER fsowner
2 PROFILE default
3 IDENTIFIED BY fspass
4 TEMPORARY TABLESPACE temp
5 ACCOUNT UNLOCK;
7. On the Database service compute node, invoke Data Pump Import and connect to the
database.
Import the data into the Database service database using the TRANSPORT_DATAFILES
option:
impdp system DIRECTORY=dp_from_onprem \
TRANSPORT_DATAFILES='/u02/app/oracle/oradata/ORCL/fsdata01.dbf', \
'/u02/app/oracle/oradata/ORCL/fsindex01.dbf'
8. On the Database service compute node, set the tablespaces in the database to READ
WRITE mode.
a. Invoke SQL*Plus and log in as the SYSTEM user.
b. Set the FSDATA and FSINDEX tablespaces to READ WRITE mode.
SQL> ALTER TABLESPACE fsdata READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE fsindex READ WRITE;
Tablespace altered.
Note
Prerequisites
- The source database name, database unique name, listener port, service name,
database home patch level, and the password for SYS.
- A copy of the sqlpatch directory from the source database home. This is required for
rollback in case the target DB System does not include these patches.
- If the source database is configured with Transparent Data Encryption (TDE), you'll
need a backup of the wallet and the wallet password to allow duplication of a database
with encrypted data.
- A target DB System that supports the same database edition as the source database
edition. When you launch a DB System, an initial database is created on it. If necessary,
you can delete that database and create a new one by using the dbcli command line
interface. For more information about creating a DB System, see Managing DB Systems. For
information about creating a database with the CLI, see Database Commands.
- The target database name, database unique name, auxiliary service name, and
database home patch level.
- A free TCP port on the target database host to set up the auxiliary instance.
1. Make sure the source DB System is reachable from the target DB System. You should be
able to SSH between the two hosts.
2. On the target host, use the TNSPING utility to make sure the source host listener port
works. For example:
tnsping <source host>:1521
3. On the target host, use Easy Connect to verify the connection to the source database:
<host>:<port>/<servicename>
For example:
sqlplus <username>@<source host>:1521/proddb
7. Make sure the compatibility parameters in the source database are set to at least
11.2.0.4.0 for an 11.2.0.4 database and at least 12.1.0.2.0 for a 12.1.0.2 database.
2. Log in as opc and then sudo to the root user. Use sudo su - with a hyphen to invoke the
root user's profile, which sets the PATH to the dbcli directory
(/opt/oracle/dcs/bin).
login as: opc
3. Use the dbcli create-dbstorage command to set up directories for DATA, RECO, and REDO storage.
The following example creates 10GB of ACFS storage for the tdetest database.
[root@dbsys ~]# dbcli create-dbstorage --dbname tdetest --dataSize 10 --dbstorage ACFS
Note
4. Use the dbcli list-dbstorages command to list the storage ID. You'll need the ID for the
next step.
[root@dbsys ~]# dbcli list-dbstorages
ID Type DBUnique Name Status
---------------------------------------- ------ -------------------- ----------
9dcdfb8e-e589-4d5f-861a-e5ba981616ed Acfs tdetest Configured
5. Use the dbcli describe-dbstorage command with the storage ID from the previous step
to list the DATA, RECO and REDO locations.
[root@dbsys ~]# dbcli describe-dbstorage --id 9dcdfb8e-e589-4d5f-861a-e5ba981616ed
DBStorage details
----------------------------------------------------------------
ID: 9dcdfb8e-e589-4d5f-861a-e5ba981616ed
DB Name: tdetest
DBUnique Name: tdetest
DB Resource ID:
Note the locations. You'll use them later to set the db_create_file_dest,
db_create_online_log_dest, and db_recovery_file_dest parameters for the database.
Choosing an ORACLE_HOME
Decide which ORACLE_HOME to use for the database restore and then switch to that home
with the correct ORACLE_BASE, ORACLE_HOME, and PATH settings.
To get a list of existing ORACLE_HOMEs, use the dbcli list-dbhomes command. To create a
new ORACLE_HOME, use the dbcli create-dbhome command.
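For example, you might switch to a chosen home by setting the environment as follows; the paths below are hypothetical, so substitute the Home Location reported by dbcli list-dbhomes:

```shell
# Hypothetical locations; substitute the values reported by dbcli list-dbhomes
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
```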
Skip this section if the source database is not configured with TDE.
3. Copy the ewallet.p12 file from the source database to the directory you created in the
previous step.
4. On the target host, make sure that $ORACLE_HOME/network/admin/sqlnet.ora
contains the following line:
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=
(DIRECTORY=/opt/oracle/dcs/commonstore/wallets/tde/$ORACLE_UNQNAME)))
Add the line if it doesn't exist in the file. (The line might not be there if this is a new
home and no database has been created yet on this host.)
5. Create the autologin wallet from the password-based wallet to allow auto-open of the
wallet during restore and recovery operations.
For version 12c, use the ADMINISTER KEY MANAGEMENT command:
$ cat create_autologin_12.sh
#!/bin/sh
if [ $# -lt 2 ]; then
echo "Usage: $0 <dbuniquename> <remotewalletlocation>"
exit 1;
fi
mkdir /opt/oracle/dcs/commonstore/wallets/tde/$1
cp $2/ewallet.p12* /opt/oracle/dcs/commonstore/wallets/tde/$1
rm -f autokey.ora
echo "db_name=$1" > autokey.ora
autokeystoreLog="autologinKeystore_`date +%Y%m%d_%H%M%S_%N`.log"
echo "Enter Keystore Password:"
read -s keystorePassword
echo "Creating AutoLoginKeystore -> "
sqlplus "/as sysdba" <<EOF
spool $autokeystoreLog
set echo on
startup nomount pfile=autokey.ora
ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE
FROM KEYSTORE '/opt/oracle/dcs/commonstore/wallets/tde/$1' -- Keystore location
IDENTIFIED BY "$keystorePassword";
shutdown immediate;
EOF
1. Set up the static listener for the auxiliary instance for RMAN duplication.
LISTENER_aux_<db_unique_name>=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=<hostname> or <ipaddress>)(PORT=<Available TCP Port>))
)
)
SID_LIST_LISTENER_aux_<db_unique_name>=
(SID_LIST=
(SID_DESC=
(GLOBAL_DBNAME=<auxServiceName with domain>)
(ORACLE_HOME=<Oraclehome for target database>)
(SID_NAME=<database name>))
)
2. Make sure the port specified in (PORT=<Available TCP Port>) is open in the
DB System's iptables and in the DB System's cloud network Security List.
1. Set the following environment variables for RMAN and SQL*Plus sessions for the
database:
ORACLE_HOME=<path of Oracle Home where the database is to be restored>
ORACLE_SID=<database name>
ORACLE_UNQNAME=<db_unique_name in lower case>
NLS_DATE_FORMAT="mm/dd/yyyy hh24:mi:ss"
3. Create an init.ora file with the minimal required parameters as described in Creating
an Initialization Parameter File and Starting the Auxiliary Instance and use it for the
auxiliary instance.
4. Start the auxiliary instance in nomount mode:
startup nomount
5. Run the following commands to duplicate the database. Note that the example below
uses variables to indicate the values to be specified:
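A minimal sketch of the duplication, assuming an active-database duplication over the network; every angle-bracket value is a placeholder, and the SET clauses use the DATA, REDO, and RECO locations noted earlier:

```
$ rman target sys/<sys_password>@<source_service> auxiliary sys/<sys_password>@<aux_service>

RMAN> DUPLICATE TARGET DATABASE TO <database_name>
2>   FROM ACTIVE DATABASE
3>   SPFILE
4>     SET db_create_file_dest '<DATA location>'
5>     SET db_create_online_log_dest_1 '<REDO location>'
6>     SET db_recovery_file_dest '<RECO location>'
7>   NOFILENAMECHECK;
```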
3. Use the following command to verify that the password file was restored or created for
a new database.
ls -ltr $ORACLE_HOME/dbs/orapw$ORACLE_SID
If the file does not exist, create it using the orapwd command.
orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=<sys password>
4. Use the following command to verify that the restored database is open in read write
mode.
select open_mode from v$database;
Read write mode is required to register the database later. Any PDBs must also be in
read write mode.
5. From the Oracle home on the migrated database host, use the following command to
verify the connection as SYS.
conn sys/<Password>@<ServiceName> as sysdba
This connection is required to register the database later. Fix any connection issues
before continuing.
6. Copy the folder $ORACLE_HOME/sqlpatch from the source database home to the target
database home. This enables the dbcli register-database command to roll back any
conflicting patches.
Note
7. Use the following SQL*Plus command to make sure the database is using the spfile.
SHOW PARAMETERS SPFILE
The dbcli register-database command registers the migrated database to the dcs-agent so it
can be managed by the dcs-agent stack.
Note
As the root user, use the dbcli register-database command to register the database on
the DB System, for example:
[root@dbsys ~]# dbcli register-database --dbclass OLTP --dbshape odb1 \
--servicename crmdb.example.com --syspassword
Password for SYS:
{
"jobId" : "317b430f-ad5f-42ae-bb07-13f053d266e2",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 08, 2016 05:55:49 AM EDT",
"description" : "Database service registration with db service name: crmdb.example.com",
"updatedTime" : "August 08, 2016 05:55:49 AM EDT"
}
For version 11.2 databases, the sqlpatch application is not automated, so any one-off patches
applied to the source database that are not part of the installed PSU must be rolled back
manually in the target database. After registering the database, execute the catbundle.sql
script and then the postinstall.sql script with the corresponding PSU patch (or the overlay
patch on top of the PSU patch), as described below.
1. On the DB System, use the dbcli list-dbhomes command to find the PSU patch
number for the version 11.2 database home. In the following sample command output,
the PSU patch number is the second number in the DB Version column:
[root@dbsys ~]# dbcli list-dbhomes
ID                                   Name              DB Version                            Home Location                           Status
------------------------------------ ----------------- ------------------------------------- --------------------------------------- ----------
(The first patch number, 23054319 in the example above, is for the OCW component in
the database home.)
2. Find the overlay patch, if any, by using the lsinventory command. In the following
example, patch number 24460960 is the overlay patch on top of the 23054359 PSU
patch.
$ $ORACLE_HOME/OPatch/opatch lsinventory
...
Installed Top-level Products (1):
4. Apply the sqlpatch, using the overlay patch number from the previous step, for
example:
Note
If you would like to manage the database backup with the dbcli command line interface, you
can associate a new or existing backup configuration with the migrated database when you
register it or after you register it. A backup configuration defines the backup destination and
recovery window for the database. As the root user, use the following commands to create,
list, and display backup configurations:
- dbcli update-backupconfig
- dbcli list-backupconfigs
- dbcli describe-backupconfig
After the database is migrated and registered on the DB System, use the following checklist to
verify the results of the migration and perform any post-migration customizations.
3. Check for the following external references in the database and update them if
necessary:
- External tables: If the source database uses external tables, back up that data
and migrate it to the target host.
- Directories: Customize the default directories as needed for the migrated
database.
- Database links: Make sure all the required TNS entries are updated in the
tnsnames.ora file in ORACLE_HOME.
- Email and URLs: Make sure any email addresses and URLs used in the database
are still accessible from the DB System.
- Scheduled jobs: Review the jobs scheduled in the source database and schedule
similar jobs as needed in the migrated database.
4. If you associated a backup configuration when you registered the database, run a test
backup using the dbcli create-backup command.
5. If the migrated database is a CDB containing PDBs, verify that patches have been
applied to all PDBs.
6. Validate the database performance by using Database Replay and SQL Performance
Analyzer for SQL. For more information, see the Database Testing Guide.
In this method, you use SQL INSERT statements to load the data into your cloud database.
1. Launch SQL Developer, connect to your on-premises database and create a cart
containing the objects you want to migrate.
2. In SQL Developer, click the Export Cart icon and select “Insert” in the Format menu.
3. In SQL Developer, open a connection to the Oracle Database 12c database in the
Database service and execute the generated script to create the database objects.
4. In SQL Developer, open a connection to the Oracle Database 12c database in the
Database service and run the generated script to create the objects and load the data.
In this method, you use SQL*Loader to load the data into your cloud database.
To migrate selected objects to an Oracle Database 12c database in the Database service
deployment using SQL Developer and SQL*Loader, you perform these tasks:
1. Launch SQL Developer, connect to your on-premises database and create a cart
containing the objects you want to load into your cloud database.
2. In SQL Developer, click the Export Cart icon and select “loader” in the Format menu.
3. In SQL Developer, open a connection to the Oracle Database 12c database on the
Database service and execute the generated script to create the database objects.
4. Use a secure copy utility to transfer the SQL*Loader control files and the SQL*Loader
data files to the Database service compute node.
5. On the Database service compute node, invoke SQL*Loader to load the data using
the SQL*Loader control files and data files for each object.
Unplugging/Plugging a PDB
You can use this method only if the on-premises platform is little endian, and the on-premises
database and the Oracle Cloud Infrastructure Database service database have compatible
database character sets and national character sets.
You can use the unplug/plug method to migrate an Oracle Database 12c PDB to a PDB in an
Oracle Database 12c database on a Database service database deployment.
To migrate an Oracle Database 12c PDB to a PDB in the Oracle Database 12c database on an
Oracle Cloud Infrastructure Database service database deployment using the plug/unplug
method, you perform these tasks:
1. On the on-premises database host, invoke SQL*Plus and close the on-premises PDB.
2. On the on-premises database host, execute the ALTER PLUGGABLE DATABASE UNPLUG
command to generate an XML file containing the list of datafiles that will be plugged in
to the database on the Database service.
3. Use a secure copy utility to transfer the XML file and the datafiles to the
Database service compute node.
4. On the Database service compute node, invoke SQL*Plus and execute the CREATE
PLUGGABLE DATABASE command to plug the database into the CDB.
5. On the Database service compute node, open the new PDB by executing the ALTER
PLUGGABLE DATABASE OPEN command.
For more information, see "Creating a PDB by Plugging an Unplugged PDB into a CDB" in
Oracle Database Administrator's Guide for Release 12.2 or 12.1.
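A sketch of the unplug/plug steps, with hypothetical names and paths (pdb1 for the PDB, /tmp/pdb1.xml for the manifest, and the datafile directories in the SOURCE_FILE_NAME_CONVERT clause):

```sql
-- Steps 1-2: on the on-premises host
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';

-- Steps 4-5: on the Database service compute node, after the XML file
-- and datafiles have been copied over
CREATE PLUGGABLE DATABASE pdb1 USING '/tmp/pdb1.xml'
  SOURCE_FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/orcl/pdb1/',
                              '/u02/app/oracle/oradata/ORCL/pdb1/')
  NOCOPY;
ALTER PLUGGABLE DATABASE pdb1 OPEN;
```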
Unplugging/Plugging Non-CDB
You can use this method only if the on-premises platform is little endian, and the on-premises
database and the Oracle Cloud Infrastructure Database service database have compatible database
character sets and national character sets.
You can use the unplug/plug method to migrate an Oracle Database 12c non-CDB database to
a PDB in an Oracle Database 12c database on a Database service database deployment. This
method provides a way to consolidate several non-CDB databases into a single Oracle
Database 12c multitenant database on the Database service.
To migrate an Oracle Database 12c non-CDB database to the Oracle Database 12c database on
a Database service deployment using the plug/unplug method, you perform these tasks:
1. On the on-premises database host, invoke SQL*Plus and set the on-premises database
to READ ONLY mode.
For more information, see "Creating a PDB Using a Non-CDB" in Oracle Database
Administrator's Guide for Release 12.2 or 12.1.
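A sketch of the non-CDB plug-in flow, with hypothetical names (ncdbpdb for the new PDB and /tmp/ncdb.xml for the manifest generated by DBMS_PDB.DESCRIBE):

```sql
-- Step 1: on the on-premises host, with the database open READ ONLY,
-- generate the XML manifest describing the non-CDB's datafiles
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/ncdb.xml');

-- On the Database service compute node, after copying the XML file and
-- datafiles: plug in, convert, and open
CREATE PLUGGABLE DATABASE ncdbpdb USING '/tmp/ncdb.xml' NOCOPY;
ALTER SESSION SET CONTAINER = ncdbpdb;
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE ncdbpdb OPEN;
```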
Large Compute clusters of thousands of instances can use File Storage Service for high-
performance shared storage. Storage provisioning is fully managed and automatic as your
use scales from a single byte to exabytes without upfront provisioning. You have redundant
storage for resilient data protection.
File Storage Service supports the Network File System version 3.0 (NFSv3) protocol. The
service supports the Network Lock Manager (NLM) protocol for file locking functionality.
Use File Storage Service when your application or workload includes big data and analytics,
media processing, or content management, and you require Portable Operating System
Interface (POSIX)-compliant file system access semantics and concurrently accessible
storage. File Storage Service is designed to meet the needs of applications and users that
need an enterprise file system across a wide range of use cases, including the following:
- Enterprise applications that need shared files, such as Oracle E-Business Suite (EBS)
- Any Oracle application that uses NFSv3 protocol, including EBS, and others
- Analytic applications and Hadoop environments, where you currently use a local NFS
file system
MOUNT TARGET
An NFS endpoint that lives in a subnet of your choice and is highly available. It provides
the IP address or DNS name that is used in the mount command when connecting NFS
clients to File Storage Service. By default, you can create two mount targets per account
per availability domain, but you can request an increase. See Service Limits for a list of
applicable limits and instructions for requesting a limit increase.
EXPORT PATH
A path that is specified during file system creation, appended to the mount target
IP address, and used to mount the file system. File Storage Service adds an export that
pairs the file system's Oracle Cloud Identifier (OCID) and path.
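For example, once a file system has been exported, an NFS client might mount it as follows; the mount target IP address 10.0.0.5 and the export path /export/fs1 are hypothetical placeholders:

```
sudo mkdir -p /mnt/fs1
sudo mount 10.0.0.5:/export/fs1 /mnt/fs1
```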
Note:
SUBNETS
Subdivisions you define in a VCN (for example, 10.0.0.0/24 and 10.0.1.0/24). Subnets
contain virtual network interface cards (VNICs), which attach to instances. Each subnet
exists in a single availability domain and consists of a contiguous range of IP addresses
that do not overlap with other subnets in the VCN. Each mount target has an address on a
subnet of your choice. For more information about subnets, see VCNs and Subnets in the
Oracle Cloud Infrastructure Networking documentation.
SECURITY LISTS
Virtual firewall rules for your VCN. Your VCN comes with a default security list, and you
can add more. These security lists provide ingress and egress rules that specify the types
of traffic allowed in and out of the instances. You can choose whether a given rule is
stateful or stateless. Security list rules must be set up so that clients can connect to file
system mount targets. For more information about security lists, see Security Lists in
the Oracle Cloud Infrastructure Networking documentation.
SNAPSHOTS
Snapshots provide a consistent, point-in-time view of your file system, and you can take
as many snapshots as you need. You pay only for the storage used by your data and
metadata, including storage capacity used by snapshots. Each snapshot reflects only data
that changed since the previous snapshot.
Encryption
File Storage Service uses AES-128 encryption to encrypt all file systems by default.
Encryption happens at the file level. Data and metadata are encrypted at rest rather than
while in transit. You cannot turn off encryption.
File Storage Service's key management system relies on one master key for each availability
domain, which is rotated periodically. The service also uses one file system master key for
each file system, generated when the file system is created. Lastly, the service generates a
file key when a file is added to the file system.
Data Transfers
FastConnect offers you the ability to accelerate data transfers. You can leverage the
integration between FastConnect and File Storage Service to perform initial data migration,
workflow data transfers for large files, and disaster recovery scenarios between two regions,
among other things.
Oracle Cloud Infrastructure users require resource permissions to create, delete, and manage
resources. Without the appropriate IAM permissions, you cannot export a file system through
a mount target. Until a file system has been exported, Compute instances can't mount it. For
more information about creating an IAM policy, see Let Users Create, Manage, and Delete File
Systems.
If you have successfully exported a file system on a subnet, you can use Networking security
lists to control traffic to and from the subnet and, therefore, the mount target. Security lists
act as a virtual firewall, allowing only the network traffic you specify to and from the IP
addresses and port ranges configured in your ingress and egress rules. The security list you
create for the subnet lets hosts send and receive packets and mount the file system. If you
have firewalls on individual instances, use FastConnect, or use a virtual private network
(VPN), the settings for those might also impact security at the networking layer. For more
information about creating a security list for File Storage Service, see Managing File Systems.
When you create a mount target for a file system, you can share it among local bare metal
and virtual Compute resources within a region. The service runs locally within each
availability domain. When you create a file system or mount target, you specify the
availability domain it is created in. Within an availability domain, File Storage Service uses
synchronous replication and high availability failover to keep your data safe and available.
Resource Identifiers
Each Oracle Cloud Infrastructure resource has a unique, Oracle-assigned identifier called an
Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify
your resources, see Resource Identifiers.
To access the Console, you must use a supported browser. Oracle Cloud Infrastructure
supports the latest versions of Google Chrome, Microsoft Edge, Internet Explorer 11, Firefox,
and Firefox ESR. Note that private browsing mode is not supported for Firefox, Internet
Explorer, or Edge.
An administrator in your organization needs to set up groups, compartments, and policies that
control which users can access which services, which resources, and the type of access. For
example, the policies control who can create new users, create and manage the cloud
network, launch instances, create buckets, download objects, etc. For more information, see
Getting Started with Policies. For specific details about writing policies for each of the
different services, see Policy Reference.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud
Infrastructure resources that your company owns, contact your administrator to set up a user
ID for you. The administrator can confirm which compartment or compartments you should be
using.
If you get a message that you don't have permission or are unauthorized, confirm with your
administrator the type of access you've been granted and which compartment you should work in.
For administrators: The policy in Let Users Create, Manage, and Delete File Systems allows
users to create file systems.
If you're new to policies, see Getting Started with Policies and Common Policies.
Create stateful ingress rules for TCP traffic associated with the following:
1. Open the Console, and then, in the left-hand navigation, in the List Scope section,
under Compartment, select the compartment that contains the subnet used by your
file system.
2. Click Networking, and then click Virtual Cloud Networks.
3. Find the cloud network you associated with your file system.
4. On the details page for the cloud network, click Security Lists, and then click Create
Security List.
5. Enter the following:
a. Create in Compartment: The compartment where you want to create the
security list, if different from the compartment you're currently working in.
b. Security List Name: A friendly name for the security list. The name doesn't
have to be unique, and it cannot be changed later in the Console (but you can
change it with the API).
6. Add the following ingress rule to allow NFS and NLM traffic:
a. Specify that it's a stateful rule by leaving the checkbox clear. (For more
information about stateful and stateless rules, see Stateful vs. Stateless Rules).
By default, rules are stateful unless you specify otherwise.
b. To allow traffic from the subnet of the cloud network, click Source CIDR, and
then enter the CIDR block for the subnet.
c. Click IP Protocol, and then click TCP.
d. For ingress of NFS and NLM traffic, click Destination Port Range, and then
enter 2048-2050.
7. Click + Add Rule to add additional rules. Make sure to delete any partially completed
rules by clicking the X next to the rule.
8. Repeat step 6 to create a second ingress rule for NFS and NLM traffic with a Source
Port Range of 2048-2050.
9. Repeat step 6 to create a third ingress rule allowing traffic to a Destination Port
Range of 111 for the NFS rpcbind utility.
10. Repeat step 6 to create a fourth ingress rule allowing traffic to a Source Port Range of
111 for the NFS rpcbind utility.
11. When you're done, click Create Security List.
6. Under Mount Target Information, click Name, and then type a name for the mount
target.
7. Click Virtual Cloud Network, and then select the VCN where you want to create the
mount target.
8. Click Subnet, and then select a subnet for the mount target.
9. Optionally, to assign an IP address to the mount target, click IP Address and then
specify an unused, local, private IP address between 10.0.0.2 and 10.0.0.254.
10. Optionally, click Hostname and then specify a hostname you want to assign to the
mount target.
11. If you want to change the default path for the file system, click Path Name and specify
a new path name, including the forward slash (/). For example, /fss. This specifies the
mount path to the file system (relative to the mount target’s IP address or hostname).
12. Click Create File System.
The File Storage Service typically creates the file system within seconds. You can add files to
a file system immediately after the service creates it.
l CreateFileSystem
l CreateMountTarget
You can perform most administrative tasks for your file systems and mount targets using the
Console or API. You can use the Console to list mount targets exporting a specific file system.
Use the API if you want to list all mount targets in a compartment.
To mount a file system and write files, use a command window on Ubuntu and other Linux systems.
For more information, see Mounting File Systems.
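For mounts that should persist across reboots, one hedged sketch of an /etc/fstab entry, where 10.x.x.x, fs-path, and yourmountpoint are placeholders for your mount target's IP address, the export path, and the local mount point:

```
10.x.x.x:/fs-path  /mnt/yourmountpoint  nfs  defaults,nofail  0  0
```

The nofail option lets the instance finish booting even if the mount target is temporarily unreachable.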
For administrators: The policy in Let Users Create, Manage, and Delete File Systems allows
users to delete file systems.
If you're new to policies, see Getting Started with Policies and Common Policies.
1. Open the Console, and then, in the top-navigation bar, under Compartment, select a
compartment.
2. Click Storage, and then click File Systems.
3. To view information about a file system, find the file system, click the Actions icon,
and then click View File System Details.
The Console displays metadata for the file system, mount targets associated with the
file system, and status for the file system and all mount targets.
l Open the Console, and then, in the top-navigation bar, under Compartment,
select a compartment.
l Click Storage, and then click File Systems.
l Find the file system you want to delete.
l Click the Actions icon, and then click Delete.
To create a snapshot
1. Open the Console, and then, in the top-navigation bar, under Compartment, select a
compartment.
2. Click Storage, and then click File Systems.
3. Click the Actions icon, and then click View File System Details.
4. In Resources, click Snapshots.
5. Click Create Snapshot.
6. Fill out the required information:
l Name: Enter a name for the snapshot. It must be unique among all other
snapshots for this file system. The name can't be changed.
7. Click Create Snapshot.
l CreateExport
l CreateMountTarget
l CreateSnapshot
l DeleteExport
l DeleteFileSystem
l DeleteMountTarget
l DeleteSnapshot
l GetExport
l GetExportSet
l GetFileSystem
l GetMountTarget
l GetSnapshot
l ListFileSystems
l ListExports
l ListExportSets
l ListMountTargets
l ListSnapshots
l UpdateExportSet
l UpdateFileSystem
l UpdateMountTarget
For administrators: The policy in Let Users Create, Manage, and Delete File Systems allows
users to connect to file systems.
If you're new to policies, see Getting Started with Policies and Common Policies.
2. Create a mount point by typing the following, replacing yourmountpoint with the local
directory from which you want to access your File Storage file system:
sudo mkdir -p /mnt/yourmountpoint
3. Mount the file system by typing the following. Replace 10.x.x.x: with the local subnet
IP address assigned to your mount target, fs-path with the export path you specified
when exporting the file system from the mount target, and yourmountpoint with the
path to the local mount point. The export path is the path to the file system (relative to
the mount target's IP address or hostname). If you did not specify a path when you
created the mount target, then 10.x.x.x:/ represents the full extent of the mount
target.
sudo mount 10.x.x.x:/fs-path /mnt/yourmountpoint
5. Write a file to the file system by typing the following. Replace yourmountpoint with the
path to the local mount point and helloworld with your filename.
sudo touch /mnt/yourmountpoint/helloworld
6. Verify that you can view the file by typing the following. Replace yourmountpoint with
the path to the local mount point and helloworld with your filename.
cd /mnt/yourmountpoint
ls
Troubleshooting
These topics cover some common issues you may run into and how to address them:
Customers can define how much free capacity is reported as available to the operating system
by using the API. To set the reported free capacity, use the UpdateExportSet operation to
update the MaxFsStatBytes value.
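On a client that has mounted the file system, the reported capacity is visible with standard tools; a minimal sketch (/ is used here only so the command runs anywhere; substitute your mount point, for example /mnt/yourmountpoint):

```shell
# Show the size, used, and available space the mounted file system
# reports to the operating system.
df -h /
```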
l Available bandwidth
We recommend that you use bare metal Compute instances because instance bandwidth
scales with the number of OCPUs. Bare metal Compute instances provide the greatest
bandwidth. Virtual machines (VMs) are bandwidth-limited based on the number of
OCPUs consumed. Single-OCPU VM Compute instances provide the least bandwidth.
l Latency
Accessing File Storage Service from an instance running in the same availability domain
minimizes latency.
l Mount options
If you do not provide explicit values for mount options such as rsize and wsize, the client
and server can negotiate the window size for read and write operations that provides the
best performance.
Snapshots can also break symbolic links that point to a target outside the file system’s root
directory. This is because when you create a snapshot of a file system, it becomes available
as a subdirectory of the .snapshot directory.
To minimize these potential issues, use a relative path as the target path when creating a
symbolic link to a file in the network file system. Also, ensure that relative paths do not point
to a target path outside the File Storage Service root directory except when the target is on
the local machine. If you must use a symbolic link that points to a target path outside the file
system, use an absolute path starting with the client’s root directory.
For example:
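The following sketch uses hypothetical local directories standing in for a mounted file system; any NFS-mounted tree would behave the same way:

```shell
# Hypothetical directories standing in for a mounted file system.
root=$(mktemp -d)
mkdir -p "$root/shared" "$root/docs"
echo "hello" > "$root/shared/config.txt"

# A relative target resolves against the link's own directory, so the link
# keeps working when the same tree appears under a .snapshot subdirectory.
ln -s ../shared/config.txt "$root/docs/config-link"

readlink "$root/docs/config-link"   # ../shared/config.txt
cat "$root/docs/config-link"        # hello
```

An absolute target such as /mnt/yourmountpoint/shared/config.txt would break inside a snapshot, because the snapshot exposes the tree under a different path.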
Overview of IAM
Oracle Cloud Infrastructure Identity and Access Management (IAM) lets you control who has
access to your cloud resources. You can control what type of access a group of users has and
to which specific resources. This section gives you an overview of IAM components and an
example scenario to help you understand how they work together.
Components of IAM
IAM uses the components described in this section. To better understand how the components
fit together, see Example Scenario.
RESOURCE
The cloud objects that your company's employees create and use when interacting with
Oracle Cloud Infrastructure. For example: compute instances, block storage volumes,
virtual cloud networks (VCNs), subnets, route tables, etc.
USER
An individual employee or system that needs to manage or use your company's Oracle
Cloud Infrastructure resources. Users might need to launch instances, manage remote
disks, work with your virtual cloud network, etc. End users of your application are not
typically IAM users. Users have one or more IAM credentials (see User Credentials).
GROUP
A collection of users who all need the same type of access to a particular set of resources
or compartment.
COMPARTMENT
A collection of related resources (such as instances, virtual cloud networks, and block
volumes) that can be accessed only by groups that have been given permission by an
administrator in your organization.
TENANCY
The root compartment that contains all of your organization's Oracle Cloud Infrastructure
resources. Oracle automatically creates your company's tenancy for you. Directly within
the tenancy are your IAM entities (users, groups, compartments, and some policies; you
can also put policies into compartments inside the tenancy). You place the other types of
cloud resources (e.g., instances, virtual networks, block storage volumes, etc.) inside the
compartments that you create.
POLICY
A document that specifies who can access which resources, and how. Access is granted at
the group and compartment level, which means you can write a policy that gives a group a
specific type of access within a specific compartment, or to the tenancy itself. If you give
a group access to the tenancy, the group automatically gets the same type of access to all
the compartments inside the tenancy. For more information, see Example Scenario and
How Policies Work. The word "policy" is used by people in different ways: to mean an
individual statement written in the policy language; to mean a collection of statements in
a single, named "policy" document (which has an Oracle Cloud ID (OCID) assigned to it);
and to mean the overall body of policies your organization uses to control access to
resources.
HOME REGION
The region where your IAM resources reside. All IAM resources are global and available
across all regions, but the master set of definitions resides in a single region, the home
region. You must make changes to your IAM resources in your home region. The changes
will be automatically propagated to all regions. For more information, see Managing
Regions.
l Audit Service
l Core Services (includes Networking, Compute, and Block Volume)
l Database
l IAM
l Load Balancing
l Object Storage
Your tenancy also automatically has a policy that gives the Administrators group access to all
of the Oracle Cloud Infrastructure API operations and all of the cloud resources in your
tenancy. You can neither change nor delete this policy. Any other users you put into the
Administrators group will have full access to all of the services. This means they can create
and manage IAM users, groups, policies, and compartments. And they can create and manage
the cloud resources such as virtual cloud networks (VCNs), instances, block storage volumes,
and any other new types of Oracle Cloud Infrastructure resources that become available in the
future.
Example Scenario
The goal of this scenario is to show how the different IAM components work together, and
basic features of policies.
In this scenario, Acme Company has two teams that will use Oracle Cloud Infrastructure
resources: Project A and Project B. In reality, your company may have
many more.
Acme Company plans to use a single virtual cloud network (VCN) for both teams, and wants a
network administrator to manage the VCN.
Acme Company also wants the Project A team and Project B team to each have their own set
of instances and block storage volumes. The Project A team shouldn't be able to use the
Project B team's instances, and vice versa. These two teams also shouldn't be allowed to
change anything about the VCN set up by the network administrator. Acme Company wants
each team to have administrators for that team's resources. The administrators for the
Project A team can decide who can use the Project A cloud resources, and how. Same for the
Project B team.
Acme Company signs up to use Oracle Cloud Infrastructure and tells Oracle that an employee
named Wenpei will be the default administrator. In response, Oracle:
Wenpei next creates several groups and users (see the following diagram). She:
l Creates groups called NetworkAdmins, A-Admins, and B-Admins (these last two are for
Project A and Project B within the company)
l Creates a user called Alex and puts him in the Administrators group.
l Leaves the new groups empty.
To learn how to create groups, see Working with Groups. To learn how to create users and
put them in groups, see Working with Users.
Wenpei next creates compartments to group resources together (see the following diagram).
She:
l Creates a compartment called Networks to control access to the Acme Company's VCN,
subnets, IPSec VPN, and other components from Networking.
l Creates a compartment called Project-A to organize Project A team's cloud resources
and control access to them.
l Creates a compartment called Project-B to organize Project B team's cloud resources
and control access to them.
Wenpei then creates a policy to give the administrators for each compartment their required
level of access. She attaches the policy to the tenancy, which means that only users with
access to manage policies in the tenancy can later update or delete the policy. In this
scenario, that is only the Administrators group. The policy includes multiple statements that:
l Give the NetworkAdmins group access to manage networks and instances (for the
purposes of easily testing the network) in the Networks compartment
l Give both the A-Admins and B-Admins groups access to use the networks in the
Networks compartment (so they can launch instances into the network).
l Give the A-Admins group access to manage all resources in the Project-A compartment.
l Give the B-Admins group access to manage all resources in the Project-B compartment.
Here's what that policy looks like (notice it has multiple statements in it):
Allow group NetworkAdmins to manage virtual-network-family in compartment Networks
Allow group NetworkAdmins to manage instance-family in compartment Networks
Allow group A-Admins,B-Admins to use virtual-network-family in compartment Networks
Allow group A-Admins to manage all-resources in compartment Project-A
Allow group B-Admins to manage all-resources in compartment Project-B
Notice the difference in the verbs (manage, use, inspect), as well as the resources
(virtual-network-family, instance-family, all-resources). For more information
about them, see Verbs and Resource-Types. To learn how to create policies, see To create a
policy.
Acme Company wants to let the administrators of the Project-A and Project-B compartments
decide which users can use the resources in those compartments. So Wenpei creates two
more groups: A-Users and B-Users. She then adds six more statements that give the
compartment admins the required access they need in order to add and remove users from
those groups:
Allow group A-Admins to use users in tenancy where target.group.name='A-Users'
Allow group A-Admins to use groups in tenancy where target.group.name='A-Users'
Allow group B-Admins to use users in tenancy where target.group.name='B-Users'
Allow group B-Admins to use groups in tenancy where target.group.name='B-Users'
Allow group A-Admins,B-Admins to inspect users in tenancy
Allow group A-Admins,B-Admins to inspect groups in tenancy
Notice that this policy doesn't let the project admins create new users or manage credentials
for the users. It lets them decide which existing users can be in the A-Users and B-Users
groups. The last two statements are necessary for A-Admins and B-Admins to list all the users
and groups, and confirm which users are in which groups.
At this point, Alex is in the Administrators group and now has access to create new users. So
he provisions users named Leslie, Jorge, and Cheri and places them in the NetworkAdmins, A-
Admins, and B-Admins groups, respectively. Alex also creates other users who will eventually
be put in the A-Users and B-Users groups by the admins for Project A and Project B.
Leslie (in the NetworkAdmins group) has access to manage virtual-network-family and
instance-family in the Networks compartment. She creates a virtual cloud network (VCN)
with a single subnet in that compartment. She also sets up an Internet gateway for the VCN,
and updates the VCN's route table to allow traffic via that gateway. To test the VCN's
connectivity to the on-premises network, she launches an instance in the subnet in the VCN.
As part of the launch request, she must specify which compartment the instance should reside
in. She specifies the Networks compartment, which is the only one she has access to. She
then confirms connectivity from the on-premises network to the VCN by logging in to the
instance via SSH from the on-premises network.
Leslie terminates her test instance and lets Jorge and Cheri know that the VCN is up and
running and ready to try out. She lets them know that their compartments are named Project-
A and Project-B respectively. For more information about setting up a cloud network, see
Overview of Networking. For information about launching instances into the network, see
Overview of the Compute Service.
Jorge and Cheri now need to set up their respective compartments. Each admin needs to do
the following:
Jorge and Cheri both launch instances into the subnet in the VCN, into their respective team's
compartments. They create and attach block volumes to the instances. Only the compartment
admins can launch/terminate instances or attach/detach block volumes in their respective
team's compartments.
But it's also important to note that Wenpei and Alex in the Administrators group do have
access to the resources inside the compartments, because they have access to manage all
kinds of resources in the tenancy. Compartments inherit any policies attached to their parent
compartment (the tenancy), so the Administrators access also applies to all compartments
within the tenancy.
Next, Jorge puts several of the users that Alex created into the A-Users group. Cheri does the
same for B-Users.
Then Jorge writes a policy that gives users the level of access they need in the Project-A
compartment.
Allow group A-Users to use instance-family in compartment Project-A
Allow group A-Users to use volume-family in compartment Project-A
Allow group A-Users to inspect virtual-network-family in compartment Networks
This lets them use existing instances (with attached block volumes) that the compartment
admins already launched in the compartment, and stop/start/reboot them. It does not let A-
Users create/delete or attach/detach any volumes. To give that ability, the policy would need
to include manage volume-family.
Jorge attaches this policy to the Project-A compartment. Anyone with the ability to manage
policies in the compartment can now modify or delete this policy. Right now, that is only the
A-Admins group (and the Administrators group, which can do anything throughout the
tenancy).
Cheri creates and attaches her own policy to the Project-B compartment, similar to Jorge's
policy:
Allow group B-Users to use instance-family in compartment Project-B
Allow group B-Users to use volume-family in compartment Project-B
Allow group B-Users to inspect virtual-network-family in compartment Networks
Now the A-Users and B-Users can work with the existing instances and attached volumes in
the Project-A and Project-B compartments, respectively. Here's what the layout looks like:
For more information about basic and advanced features of policies, see How Policies Work.
For examples of other typical policies your organization might use, see Common Policies.
This experience is different when you're viewing the lists of users, groups, and
compartments. Those reside in the tenancy itself (the root compartment), not in an individual
compartment.
As for policies, they can reside in either the tenancy or a compartment, depending on where
the policy is attached. Where it's attached controls who has access to modify or delete it. For
more information, see Policy Attachment.
For general information about using the API, see About the API.
l Make sure you're familiar with the basic IAM components, and read through the
example scenario: Overview of IAM
l Think about how to organize your resources into compartments: See "Setting Up Your
Tenancy" in the Oracle Cloud Infrastructure Getting Started Guide
l Learn the basics of how policies work: How Policies Work
l Check out some typical policies: Common Policies
l Read the FAQs below
Policy FAQs
Which of the services within Oracle Cloud Infrastructure can I control access to
through policies?
All of them, including IAM itself. You can find specific details for writing policies for each
service in the Policy Reference.
l Change or reset their own Console password, and manage their own API signing keys
and Swift passwords (see Security Credentials).
l Get a list of all the compartments in the tenancy (root compartment), regardless of
whether the user has access to the contents of any of the compartments.
The second item above enables basic navigation within the Console for all users. If the user
tries to view the contents of a compartment they don't have access to, they'll receive an
error.
Why should I separate resources by compartment? Couldn't I just put all the
resources into one compartment and then use policies to control who has
access to what?
You could put all your resources into a single compartment and use policies to control access,
but then you would lose the benefits of measuring usage and billing by compartment, simple
policy administration at the compartment level, and clear separation of resources between
projects or business units.
l Enterprise companies typically have multiple users that need similar permissions, so
policies are designed to give access to groups of users, not individual users. A user
gains access by being in a group.
l Policies are designed to allow access; there's no explicit "deny" when you write a policy.
Overview of Policies
A policy is a document that specifies who can access which Oracle Cloud Infrastructure
resources that your company has, and how. A policy simply allows a group to work in certain
ways with specific types of resources in a particular compartment. If you're not familiar with
users, groups, or compartments, see Overview of IAM.
In general, here’s the process an IAM administrator in your organization needs to follow:
1. Define users, groups, and one or more compartments to hold the cloud resources for
your organization.
2. Create one or more policies, each written in the policy language. See Common Policies.
3. Place users into the appropriate groups depending on the compartments and resources
they need to work with.
4. Provide the users with the one-time passwords that they need in order to access the
Console and work with the compartments. For more information, see User Credentials.
After the administrator completes these steps, the users can access the Console, change their
one-time passwords, and work with specific cloud resources as stated in the policies.
Policy Basics
To govern control of your resources, your company will have at least one policy. Each policy
consists of one or more policy statements that follow this basic syntax:
Allow group <group_name> to <verb> <resource-type> in compartment <compartment_name>
Notice that the statements always begin with the word Allow. Policies only allow access; they
cannot deny it. Instead there's an implicit deny, which means by default, users can do nothing
and have to be granted access through policies. (There's one exception to this rule; see Can
users do anything without an administrator writing a policy for them?)
An administrator in your organization defines the groups and compartments in your tenancy.
Oracle defines the possible verbs and resource-types you can use in policies (see Verbs and
Resource-Types).
In some cases you'll want the policy to apply to the tenancy and not a compartment inside the
tenancy. In that case, change the end of the policy statement like so:
Allow group <group_name> to <verb> <resource-type> in tenancy
For information about how many policies you can have, see Service Limits.
A Few Examples
Let's say your administrator creates a group called HelpDesk whose job is to manage users
and their credentials. Here is a policy that enables that:
Allow group HelpDesk to manage users in tenancy
Notice that because users reside in the tenancy (the root compartment), the policy simply
states the word tenancy, without the word compartment in front of it.
Next, let's say you have a compartment called Project-A, and a group called A-Admins whose
job is to manage all of the Oracle Cloud Infrastructure resources in the compartment. Here's
an example policy that enables that:
Allow group A-Admins to manage all-resources in compartment Project-A
Be aware that the policy directly above includes the ability to write policies for that
compartment, which means A-Admins can control access to the compartment's resources. For
more information, see Policy Attachment.
If you wanted to limit A-Admins' access to only launching and managing compute instances
and block storage volumes (both the volumes and their backups) in the Project-A
compartment, but the network itself lives in the Networks compartment, then the policy could
instead be:
Allow group A-Admins to manage instance-family in compartment Project-A
Allow group A-Admins to manage volume-family in compartment Project-A
Allow group A-Admins to use virtual-network-family in compartment Networks
The third statement with the virtual-network-family resource-type enables the instance
launch process, because the cloud network is involved. Specifically, the launch process
creates a new VNIC and attaches it to the subnet where the instance resides.
Typically you'll specify a group or compartment by name in the policy. However, you can use
the OCID instead. Just make sure to add "id" before the OCID. For example:
Allow group id ocid1.group.oc1..aaaaaaaaqjihfhvxmumrl3isyrjw3n6c4rzwskaawuc7i5xwe6s7qmnsbc6a to <verb> <resource-type> in compartment <compartment_name>
Verbs
Oracle defines the possible verbs you can use in your policies. Here's a summary of the verbs,
from least amount of access to the most:
use: Includes read plus the ability to work with existing resources (the actions vary by
resource type). Includes the ability to update the resource, except for resource-types where
the "update" operation has the same effective impact as the "create" operation (e.g.,
UpdatePolicy, UpdateSecurityList, etc.), in which case the "update" ability is available only
with the manage verb. In general, this verb does not include the ability to create or delete
that type of resource. Typical users: day-to-day end users of resources.
The verb gives a certain general type of access (e.g., inspect lets you list and get
resources). When you then join that type of access with a particular resource-type in a policy
(e.g., Allow group XYZ to inspect compartments in the tenancy), then you give that
group access to a specific set of permissions and API operations (e.g., ListCompartments,
GetCompartment). For more examples, see Details for Verb + Resource-Type Combinations.
The Policy Reference includes a similar table for each service, giving you a list of exactly
which API operations are covered for each combination of verb and resource-type.
Users: Access to both manage users and manage groups lets you do anything with users and
groups, including creating and deleting users and groups, and adding/removing users from
groups. To add/remove users from groups without the ability to create and delete users and
groups, you need both use users and use groups. See Common Policies.
Policies: The ability to update a policy is available only with manage policies, not use
policies, because updating a policy is similar in effect to creating a new policy (you can
overwrite the existing policy statements). In addition, inspect policies lets you get the full
contents of the policies.
Object Storage objects: inspect objects lets you list all the objects in a bucket and do a
HEAD operation for a particular object. In comparison, read objects lets you download the
object itself.
Load Balancing resources: Be aware that inspect load-balancers lets you get all
information about your load balancers and related components (backend sets, etc.).
Networking resources:
Be aware that the inspect verb not only returns general information about the cloud
network's components (for example, the name and OCID of a security list, or of a route
table). It also includes the contents of the component (for example, the actual rules in the
security list, the routes in the route table, and so on).
Also, some types of abilities are available only with the manage verb, not the use verb.
Resource-Types
Oracle also defines the resource-types you can use in your policies. First, there are individual
types. Each individual type represents a specific type of resource. For example, the vcns
resource-type is specifically for virtual cloud networks (VCNs).
To make policy writing easier, there are family types that include multiple individual
resource-types that are often managed together. For example, the virtual-network-family
type brings together a variety of types related to the management of VCNs (e.g., vcns,
subnets, route-tables, security-lists, etc.). If you need to write a more granular policy
that gives access to only an individual resource-type, you can. But you can also easily write a
policy to give access to a broader range of resources.
Note that there are other ways to make policies more granular, such as the ability to specify
conditions under which the access is granted. For more information, see Advanced Policy
Features.
Policy Inheritance
A basic feature of policies is the concept of inheritance: Compartments inherit any policies
that apply to their parent (i.e., the root compartment). The simplest example is the
Administrators group, which automatically comes with your tenancy (see The Administrators
Group and Policy): There's a built-in policy that enables the Administrators group to do
anything in the tenancy:
Allow group Administrators to manage all-resources in tenancy
Because of policy inheritance, the Administrators group can also do anything in any of the
compartments in the tenancy.
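The inheritance rule can be modeled as a walk up the compartment tree: a policy attached to a compartment (or the tenancy) applies to that compartment and everything beneath it. The sketch below is an illustration with a hypothetical compartment tree, not OCI's implementation.

```python
# A minimal model of policy inheritance: a policy scoped to a compartment
# applies to that compartment and every compartment beneath it.
# The compartment tree here is hypothetical.

parents = {
    "Project-A": "root",   # "root" stands in for the tenancy
    "Networks": "root",
    "Team-X": "Project-A",
}

def ancestors(compartment):
    """Return the compartment plus all of its ancestors up to the tenancy."""
    chain = [compartment]
    while compartment in parents:
        compartment = parents[compartment]
        chain.append(compartment)
    return chain

def policy_applies(policy_scope, target_compartment):
    # A policy scoped to the tenancy ("root") or to any ancestor of the
    # target compartment applies to the target.
    return policy_scope in ancestors(target_compartment)

# The built-in Administrators policy is scoped to the tenancy, so it
# applies everywhere:
print(policy_applies("root", "Team-X"))       # True
print(policy_applies("Project-A", "Team-X"))  # True
print(policy_applies("Networks", "Team-X"))   # False
```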
Policy Attachment
Another basic feature of policies is the concept of attachment. When you create a policy you
must attach it to a compartment (or the tenancy, which is the root compartment). Where
you attach it controls who can then modify it or delete it. If you attach it to the
tenancy (in other words, if the policy is in the tenancy), then anyone with access to manage
policies in the tenancy can then change or delete it. Typically that's the Administrators group
or any similar group you create and give broad access to. Anyone with access only to a child
compartment cannot modify or delete that policy.
If you instead attach the policy to a child compartment, then anyone with access to manage
the policies in that compartment can change or delete it. In practical terms, this means it's
easy to give compartment administrators (i.e., a group with access to manage all-
resources in the compartment) access to manage their own compartment's policies, without
giving them broader access to manage policies that reside in the tenancy. For an example that
uses this kind of compartment administrator design, see Example Scenario. (Recall that
because of policy inheritance, users with access to manage policies in the tenancy
automatically have the ability to manage policies in compartments inside the tenancy.)
When you write a policy, you can indicate directly in the statement which compartment it
applies to. Alternatively, you can indicate the intended compartment by attaching the policy to
that compartment. If you attach the policy, you don't need to specify the compartment in the
policy statement itself—you can omit the portion of the statement that says in compartment
COMPARTMENT_NAME (or in tenancy). This means you can write a general policy template that
you can then easily use with a number of compartments (existing ones or perhaps future ones
your organization will need later). For example, let's say you have a centralized
VolumeBackupAdmins group. You could have a policy that gives the group the type of access
they need to back up block storage volumes, and then attach the policy to each compartment.
In the future, if you need to create a new compartment, you simply attach the policy to it. The
policy ensures that volume backups can be created only in compartments (not the tenancy),
and only by the centralized VolumeBackupAdmins group:
Allow group VolumeBackupAdmins to use volumes
Allow group VolumeBackupAdmins to manage volume-backups
The process of attaching the policy is easy (whether attaching to a compartment or the
tenancy): If you're using the Console, when you add the policy to IAM, simply make sure
you're in the desired compartment when you create the policy. If you're using the API, you
specify the OCID of the desired compartment (either the tenancy or other compartment) as
part of the request to create the policy.
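When creating a policy via the API, the request body carries the target compartment's OCID alongside the statements. The sketch below shows the shape of such a request body; the field names follow the IAM CreatePolicy request as I understand it (verify against the API reference), and the OCID is a placeholder.

```python
import json

# Shape of a CreatePolicy request body: the compartment OCID you supply
# here is what the policy gets attached to. All values are placeholders.

create_policy_details = {
    "compartmentId": "ocid1.compartment.oc1..exampleuniqueid",
    "name": "Project-A-Admins-Policy",
    "description": "Lets A-Admins manage all resources in Project-A",
    "statements": [
        "Allow group A-Admins to manage all-resources in compartment Project-A"
    ],
}

print(json.dumps(create_policy_details, indent=2))
```

In the Console the same choice is made implicitly: the policy is attached to whichever compartment you are in when you create it.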
If the policy does explicitly state a compartment, you'll get an error if you try to attach the
policy to a different compartment. Notice that attachment occurs during policy creation, which
means a policy can be attached to only one compartment. To learn how to attach a policy to a
compartment, see To create a policy.
It's possible that the definition of a verb or resource-type could change in the future. For
example, let's say that the virtual-network-family resource-type changes to include a new
kind of resource that's been added to Networking. By default, your policies automatically stay
current with any changes in service definition, so any policy you have that gives access to
virtual-network-family automatically covers the new kind of resource.
The Policy Reference includes details of the specific resource-types for each service, and
which verb + resource-type combination gives access to which API operations. Here are links
to the specific sections:
Common Policies
This section includes some common policies you might want to use in your organization.
Where to create the policy: In the tenancy, because users reside in the tenancy.
Allow group HelpDesk to manage users in tenancy
- The operation to list IAM policies includes the contents of the policies themselves
- The list operations for Networking resource-types return all the information (e.g., the
  contents of security lists and route tables)
- The operation to list instances requires the read verb instead of inspect, and the
  contents include the user-provided metadata
- The operation to view Audit service events requires the read verb instead of inspect
Where to create the policy: In the tenancy. Because of the concept of policy inheritance,
auditors can then inspect both the tenancy and all compartments beneath it. Or you could
choose to give auditors access to only specific compartments if they don't need access to the
entire tenancy.
Allow group Auditors to inspect all-resources in tenancy
Where to create the policy: In the tenancy. Because of the concept of policy inheritance,
NetworkAdmins can then manage a cloud network in any compartment. To reduce the scope
of access to a particular compartment, specify that compartment instead of the tenancy.
Allow group NetworkAdmins to manage virtual-network-family in tenancy
Where to create the policy: In the tenancy. Because of the concept of policy inheritance,
NetworkAdmins can then manage load balancers in any compartment. To reduce the scope of
access to a particular compartment, specify that compartment instead of the tenancy.
Allow group NetworkAdmins to manage load-balancers in tenancy
If a particular group needs to update existing load balancers (e.g., modify the backend set)
but not create or delete them, use this statement:
Allow group LBUsers to use load-balancers in tenancy
Where to create the policy: The easiest approach is to put this policy in the tenancy. If you
want the admins of the individual compartments (ABC and XYZ) to have control over the
individual policy statements for their compartments, see Policy Attachment.
Allow group InstanceLaunchers to manage instance-family in compartment ABC
The first statement lets VolumeAdmins manage all the volumes and volume backups in all the
compartments. The second statement is required in order to attach/detach the volumes from
instances.
Where to create the policy: In the tenancy, so that the access is easily granted to all
compartments by way of policy inheritance. To reduce the scope of access to just the
volumes/backups and instances in a particular compartment, specify that compartment
instead of the tenancy.
Allow group VolumeAdmins to manage volume-family in tenancy
Allow group VolumeAdmins to use instance-family in tenancy
Where to create the policy: In the tenancy, so that the access is easily granted to all
compartments by way of policy inheritance. To reduce the scope of access to just the volumes
and backups in a particular compartment, specify that compartment instead of the tenancy.
Allow group VolumeBackupAdmins to use volumes in tenancy
Allow group VolumeBackupAdmins to manage volume-backups in tenancy
If the group will be using the Console, the following policy gives a better user experience:
Allow group VolumeBackupAdmins to use volumes in tenancy
Allow group VolumeBackupAdmins to manage volume-backups in tenancy
Allow group VolumeBackupAdmins to inspect volume-attachments in tenancy
Allow group VolumeBackupAdmins to inspect instances in tenancy
The last two statements are not necessary in order to manage volume backups. However,
they enable the Console to display all the information about a particular volume.
Where to create the policy: In the tenancy, so that the ability to create, manage, or delete
a file system is easily granted to all compartments by way of policy inheritance. To reduce the
scope of these administrative functions to file systems in a particular compartment, specify
that compartment instead of the tenancy.
Allow group StorageAdmins to manage file-systems in tenancy
Where to create the policy: In the tenancy, so that the access is easily granted to all
compartments by way of policy inheritance. To reduce the scope of access to just the buckets
and objects in a particular compartment, specify that compartment instead of the tenancy.
Allow group ObjectAdmins to manage buckets in tenancy
Let Users Write Objects to Object Storage Buckets
Where to create the policy: The easiest approach is to put this policy in the tenancy. If you
want the admins of compartment ABC to have control over the policy, see Policy Attachment.
Allow group ObjectWriters to manage objects in compartment ABC where any {request.permission='OBJECT_CREATE', request.permission='OBJECT_INSPECT'}
To limit access to a specific bucket in a particular compartment, add the condition where
target.bucket.name='<bucket_name>'. The following policy allows the user to list all the
buckets in a particular compartment, but they can only list the objects in and upload objects to
BucketA:
Allow group ObjectWriters to read buckets in compartment ABC
Allow group ObjectWriters to manage objects in compartment ABC where all {target.bucket.name='BucketA', any {request.permission='OBJECT_CREATE', request.permission='OBJECT_INSPECT'}}
For more information about using conditions, see Advanced Policy Features.
Where to create the policy: The easiest approach is to put this policy in the tenancy. If you
want the admins of compartment ABC to have control over the policy, see Policy Attachment.
Allow group ObjectReaders to read buckets in compartment ABC
To limit access to a specific bucket in a particular compartment, add the condition where
target.bucket.name='<bucket_name>'. The following policy allows the user to list all
buckets in a particular compartment, but they can only read the objects in and download from
BucketA:
Allow group ObjectReaders to read buckets in compartment ABC
Allow group ObjectReaders to read objects in compartment ABC where target.bucket.name='BucketA'
For more information about using conditions, see Advanced Policy Features.
Where to create the policy: In the tenancy, so that the access is easily granted to all
compartments by way of policy inheritance. To reduce the scope of access to just the
database systems in a particular compartment, specify that compartment instead of the
tenancy.
Allow group DatabaseAdmins to manage database-family in tenancy
Let Group Admins Manage Group Membership
The first two statements let GroupAdmins list all the users and groups in the tenancy, list
which users are in a particular group, and list what groups a particular user is in.
The last two statements together let GroupAdmins change a group's membership. The
condition at the end of the last two statements lets GroupAdmins manage membership to all
groups except the Administrators group (see The Administrators Group and Policy). You
should protect membership to that group because it has power to do anything throughout the
tenancy.
It might seem that the last two statements should also cover the basic listing functionality that
the first two statements enable. To better understand how conditions work and why you also
need the first two statements, see Variables that Aren't Applicable to a Request Result in a
Declined Request.
Where to create the policy: In the tenancy, because users and groups reside in the
tenancy.
Allow group GroupAdmins to inspect users in tenancy
Allow group GroupAdmins to inspect groups in tenancy
Allow group GroupAdmins to use users in tenancy where target.group.name != 'Administrators'
Allow group GroupAdmins to use groups in tenancy where target.group.name != 'Administrators'
Allow group IAD-Admins to manage all-resources in tenancy where request.region = 'iad'
The preceding policy allows IAD-Admins to manage all aspects of all resources in the Ashburn
(IAD) region. Assuming this tenancy's home region is Phoenix (PHX), this policy does not
allow IAD-Admins to manage IAM resources, because IAM changes can be made only in the home
region.
Conditions
As part of a policy statement, you can specify one or more conditions that must be met in
order for access to be granted. For a simple example, see Let Group Admins Manage Group
Membership.
Each condition consists of one or more predefined variables that you specify values for in the
policy statement. Later, when someone requests access to the resource in question, if the
condition in the policy is met, it evaluates to true and the request is allowed. If the condition is
not met, it evaluates to false and the request is not allowed.
There are two types of variables: those that are relevant to the request itself, and those
relevant to the resource(s) being acted upon in the request, also known as the target. The
name of the variable is prefixed accordingly with either request or target followed by a
period. For example, there's a request variable called request.operation to represent the
API operation being requested. This variable lets you write a broad policy statement, but add
a condition based on the specific API operation. For an example, see Let Users Write Objects
to Object Storage Buckets.
For more information about the syntax of conditions, see Conditions. For a list of all the
variables you can use in policies, see the tables in the Policy Reference.
Permissions
Permissions are the atomic units of authorization that control a user's ability to perform
operations on resources. Oracle defines all the permissions in the policy language. When you
write a policy giving a group access to a particular verb and resource-type, you're actually
giving that group access to one or more predefined permissions. The purpose of verbs is to
simplify the process of granting multiple related permissions that cover a broad set of access
or a particular operational scenario. The next sections give more details and examples.
Relation to Verbs
To understand the relationship between permissions and verbs, let's look at an example. A
policy statement that allows a group to inspect volumes actually gives the group access to a
permission called VOLUME_INSPECT (permissions are always written with all capital letters
and underscores). In general, that permission enables the user to get information about block
volumes.
As you go from inspect > read > use > manage, the level of access generally increases, and
the permissions granted are cumulative. The following table shows the permissions included
with each verb for the volumes resource-type. Notice that no additional permissions are
granted going from inspect to read.
VOLUMES
INSPECT: VOLUME_INSPECT
READ: no extra
USE: + VOLUME_UPDATE, VOLUME_WRITE
MANAGE: + VOLUME_CREATE, VOLUME_DELETE
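The cumulative relationship between verbs can be modeled as a union of permission sets, one level at a time. This sketch is an illustration using the volumes permissions listed for each verb; it is not OCI's implementation.

```python
# Verb access is cumulative: each verb grants its own incremental
# permissions plus everything granted by the verbs below it.

VERB_ORDER = ["inspect", "read", "use", "manage"]

# Incremental permissions added by each verb for the volumes
# resource-type (read adds none beyond inspect):
INCREMENTAL = {
    "inspect": {"VOLUME_INSPECT"},
    "read": set(),
    "use": {"VOLUME_UPDATE", "VOLUME_WRITE"},
    "manage": {"VOLUME_CREATE", "VOLUME_DELETE"},
}

def permissions_for(verb):
    """All permissions granted by a verb: union of its level and below."""
    granted = set()
    for v in VERB_ORDER[: VERB_ORDER.index(verb) + 1]:
        granted |= INCREMENTAL[v]
    return granted

print(sorted(permissions_for("read")))    # same as inspect
print(sorted(permissions_for("manage")))  # all five volume permissions
```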
The policy reference lists the permissions covered by each verb for each given resource-type.
For example, for block volumes and other resources covered by the Core Services, see the
tables in Details for Verb + Resource-Type Combinations. The left column of each of those
tables lists the permissions covered by each verb. The other sections of the policy reference
include the same kind of information for the other services.
Each API operation requires the caller to have access to one or more permissions. For
example, to use either ListVolumes or GetVolume, you must have access to a single
permission: VOLUME_INSPECT. To attach a volume to an instance, you must have access to
multiple permissions, some of which are related to the volumes resource-type, some to the
volume-attachments resource-type, and some related to the instances resource-type:
- VOLUME_WRITE
- VOLUME_ATTACHMENT_CREATE
- INSTANCE_ATTACH_VOLUME
The policy reference lists which permissions are required for each API operation. For
example, for the Core Services API operations, see the table in Permissions Required for Each
API Operation.
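An authorization check against this model is a subset test: the caller can invoke an operation only if every required permission has been granted. The sketch below uses the requirements named above for ListVolumes, GetVolume, and AttachVolume; it is an illustration, not OCI's authorization engine.

```python
# Each API operation requires one or more permissions, possibly spanning
# several resource-types. Requirements below are taken from the text:
# ListVolumes/GetVolume need VOLUME_INSPECT; AttachVolume needs three
# permissions across volumes, volume-attachments, and instances.

REQUIRED = {
    "ListVolumes": {"VOLUME_INSPECT"},
    "GetVolume": {"VOLUME_INSPECT"},
    "AttachVolume": {
        "VOLUME_WRITE",
        "VOLUME_ATTACHMENT_CREATE",
        "INSTANCE_ATTACH_VOLUME",
    },
}

def can_call(granted, operation):
    # The caller needs every permission the operation requires.
    return REQUIRED[operation] <= granted

granted = {"VOLUME_INSPECT", "VOLUME_WRITE", "VOLUME_ATTACHMENT_CREATE"}
print(can_call(granted, "GetVolume"))     # True
print(can_call(granted, "AttachVolume"))  # False: missing INSTANCE_ATTACH_VOLUME
```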
The policy language is designed to let you write simple statements involving only verbs and
resource-types, without having to state the desired permissions in the statement. However,
there may be situations where a security team member or auditor wants to understand the
specific permissions a particular user has. The tables in the policy reference show each verb
and the associated permissions. You can look at the groups the user is in and the policies
applicable to those groups, and from there compile a list of the permissions granted.
However, having a list of the permissions isn't the complete picture. Conditions in a policy
statement can scope a user's access more narrowly than individual permissions (see the next
section).
Also, each policy statement specifies a particular compartment and can have conditions that
further scope the access to only certain resources in that compartment.
In a policy statement, you can use conditions combined with permissions or API operations to
reduce the scope of access granted by a particular verb.
For example, let's say you want group XYZ to be able to list, get, create, or update groups
(i.e., change their description), but not delete them. To list, get, create, and update groups,
you need a policy with manage groups as the verb and resource-type. According to the table
in Details for Verb + Resource-Type Combinations, the permissions covered are:
- GROUP_INSPECT
- GROUP_UPDATE
- GROUP_CREATE
- GROUP_DELETE
To restrict access to only the desired permissions, you could add a condition that explicitly
states the permissions you want to allow:
Allow group XYZ to manage groups in tenancy where any {request.permission='GROUP_INSPECT', request.permission='GROUP_UPDATE', request.permission='GROUP_CREATE'}
An alternative is to state the permission you want to exclude:
Allow group XYZ to manage groups in tenancy where request.permission != 'GROUP_DELETE'
However, with this second approach, be aware that any new permissions the service might add
in the future would automatically be granted to group XYZ; only GROUP_DELETE would be
omitted.
Another alternative would be to write a condition based on the specific API operations. Notice
that according to the table in Permissions Required for Each API Operation, both ListGroups
and GetGroup require only the GROUP_INSPECT permission. Here's the policy:
Allow group XYZ to manage groups in tenancy where any {request.operation='ListGroups', request.operation='GetGroup', request.operation='CreateGroup', request.operation='UpdateGroup'}
It can be beneficial to use permissions instead of API operations in conditions. In the future, if
a new API operation is added that requires one of the permissions listed in the permissions-
based policy above, that policy will already control XYZ group's access to that new API
operation.
But notice that you can further scope a user's access to a permission by also specifying a
condition based on API operation. For example, you could give a user access to GROUP_
INSPECT, but then only to ListGroups.
Allow group XYZ to manage groups in tenancy where all {request.permission='GROUP_INSPECT', request.operation='ListGroups'}
Policy Syntax
The overall syntax of a policy statement is as follows:
Allow <subject> to <verb> <resource-type> in <location> where <conditions>
For limits on the number of policies and statements, see Service Limits.
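The syntax above is regular enough to sketch a rough parser for. This is an illustrative sketch only: the real grammar also allows multiple comma-separated groups, any-user, and richer condition syntax, which the regex below only captures as raw text.

```python
import re

# A rough parser for: Allow <subject> to <verb> <resource-type>
# in <location> [where <conditions>]. The four verbs are fixed by the
# policy language; everything else is captured as text.

STATEMENT = re.compile(
    r"^Allow (?P<subject>.+?) to (?P<verb>inspect|read|use|manage) "
    r"(?P<resource_type>\S+) in (?P<location>.+?)"
    r"(?: where (?P<conditions>.+))?$"
)

def parse(statement):
    """Return the statement's parts as a dict, or None if it doesn't match."""
    m = STATEMENT.match(statement)
    return m.groupdict() if m else None

parsed = parse("Allow group A-Admins to manage all-resources in compartment Project-A")
print(parsed["verb"], parsed["resource_type"])  # manage all-resources
```

A parse like this is handy for linting policy files before submitting them, though the service itself validates statements on creation.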
Subject
Specify one or more comma-separated groups by name or OCID. Or specify any-user to
cover all users in the tenancy.
Examples:
- To specify a single group by OCID:
Allow group
id ocid1.group.oc1..aaaaaaaaqjihfhvxmum...awuc7i5xwe6s7qmnsbc6a
- To specify multiple groups by OCID (the OCIDs are shortened for brevity):
Allow group
id ocid1.group.oc1..aaaaaaaaqjihfhvxmumrl...wuc7i5xwe6s7qmnsbc6a,
id ocid1.group.oc1..aaaaaaaavhea5mellwzb...66yfxvl462tdgx2oecyq
Verb
Specify a single verb. For a list of verbs, see Verbs. Example:
Allow group A-Admins to manage all-resources in compartment Project-A
Resource-Type
Specify a single resource-type, which can be either an individual resource-type (e.g., vcns,
subnets, instances, volumes) or a family resource-type (e.g., virtual-network-family,
instance-family).
A family resource-type covers a variety of components that are typically used together. This
makes it easier to write a policy that gives someone access to work with various aspects of
your cloud network.
Location
Specify a single compartment by name or OCID. Or simply specify tenancy to cover the
entire tenancy. Remember that users, groups, and compartments reside in the tenancy.
Policies can reside in (i.e., be attached to) either the tenancy or a child compartment.
The location is optional in the statement. If you omit it, the statement applies to the
compartment (or tenancy) that the policy is attached to. For more information, see Policy
Attachment.
Examples:
Allow group id
ocid1.group.oc1..aaaaaaaavhea5mell...b4rm66yfxvl462tdgx2oecyq
to manage all-resources in compartment id
ocid1.compartment.oc1..aaaaaaaaphfjutov5s...vyypllbtctehnqg756a
Conditions
Specify one or more conditions. Use any or all with multiple conditions for a logical OR or
AND, respectively.
For a list of variables supported by all the services, see General Variables for All Requests.
Also see the details for each service in the Policy Reference. Here are the types of values you
can use in conditions:
Type: String. Example: '[email protected]'
Type: Entity (OCID). Example: 'ocid1.compartment.oc1..aaaaaaaaph...ctehnqg756a'
Examples:
- A single condition.
The following policy enables the GroupAdmins group to create, update, or delete any
groups with names that start with "A-Users-":
Allow group GroupAdmins to manage groups in tenancy where target.group.name = /A-Users-*/
The following policy enables the GroupAdmins group to manage the membership of any
group besides the Administrators group:
Allow group GroupAdmins to use users in tenancy where target.group.name != 'Administrators'
The following policy enables the NetworkAdmins group to manage cloud networks in any
compartment except the one specified:
Allow group NetworkAdmins to manage virtual-network-family in tenancy where target.compartment.id != 'ocid1.compartment.oc1..aaaaaaaayzfqeibduyox6icmdol6zyar3ugly4fmameq4h7lcdlihrvur7xq'
- Multiple conditions.
The following policy lets GroupAdmins create, update, or delete any groups whose names
start with "A-", except for the A-Admins group itself:
Allow group GroupAdmins to manage groups in tenancy where all {target.group.name=/A-*/, target.group.name!='A-Admins'}
Note that in the above policies, the statements do not let GroupAdmins actually list all the
users and groups. To understand why not, see Variables that Aren't Applicable to a Request
Result in a Declined Request.
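The any/all semantics in the examples above (any for logical OR, all for logical AND, pattern values like /A-*/ for wildcard matches, quoted values for exact comparison) can be sketched as a small evaluator. This is illustrative semantics only, not OCI's implementation.

```python
import re

# Evaluate an any/all condition set against request/target variables.
# Pattern values wrapped in /.../ treat * as a wildcard; other values
# compare exactly.

def matches(value, pattern):
    if pattern.startswith("/") and pattern.endswith("/"):
        # translate the policy pattern's * wildcard into a regex
        regex = re.escape(pattern[1:-1]).replace(r"\*", ".*")
        return re.fullmatch(regex, value) is not None
    return value == pattern

def eval_conditions(mode, conditions, variables):
    """conditions: list of (variable, op, value) with op "=" or "!=";
    mode: "any" (logical OR) or "all" (logical AND)."""
    results = []
    for var, op, val in conditions:
        ok = matches(variables.get(var, ""), val)
        results.append(ok if op == "=" else not ok)
    return any(results) if mode == "any" else all(results)

# where all {target.group.name=/A-*/, target.group.name!='A-Admins'}
conds = [("target.group.name", "=", "/A-*/"),
         ("target.group.name", "!=", "A-Admins")]
print(eval_conditions("all", conds, {"target.group.name": "A-Users-1"}))  # True
print(eval_conditions("all", conds, {"target.group.name": "A-Admins"}))   # False
```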
Policy Reference
This reference covers the verbs, resource-types, and condition variables you can use in
policies, along with the details for each service.
For instructions on how to create and manage policies using the Console or API, see Managing
Policies.
Verbs
The verbs are listed in order of least amount of ability to most. The exact meaning of each
verb depends on which resource-type it's paired with. The tables later in this section show the
API operations covered by each combination of verb and resource-type.
inspect: Ability to list resources, without access to any confidential information or
user-specified metadata that may be part of that resource. Target users: third-party
auditors.
read: Includes inspect plus the ability to get user-specified metadata and the actual
resource itself. Target users: internal auditors.
use: Includes read plus the ability to work with existing resources (the actions vary by
resource-type). Includes the ability to update the resource, except for resource-types
where the "update" operation has the same effective impact as the "create" operation
(e.g., UpdatePolicy, UpdateSecurityList), in which case the "update" ability is
available only with the manage verb. In general, this verb does not include the ability
to create or delete that type of resource. Target users: day-to-day end users of
resources.
manage: Includes all permissions for the resource. Target users: administrators.
Resource-Types
The family resource-types are listed below. For the individual resource-types that make up
each family, follow the links.
IAM has no family resource-type, only individual ones. See Details for IAM.
target.tag.namespace.id: Entity (OCID). The OCID of the tag namespace that the resource is
tagged with.
target.tag.definition.id: Entity (OCID). The OCID of the tag definition that the resource is
tagged with.
Resource-Types
audit-events
Supported Variables
Only the general variables are supported (see General Variables for All Requests).
The following tables show the permissions and API operations covered by each verb. The level
of access is cumulative as you go from inspect > read > use > manage. A plus sign (+) in a
table cell indicates incremental access compared to the cell directly above it, whereas "no
extra" indicates no incremental access.
For example, the use and manage verbs for the audit-events resource-type cover no extra
permissions or API operations compared to the read verb.
AUDIT-EVENTS
INSPECT: none
READ: AUDIT_EVENT_READ
USE: no extra
MANAGE: no extra
The following table lists the API operations in a logical order, grouped by resource type:
ListAuditEvents: AUDIT_EVENT_READ
Networking
Resource-Types
AGGREGATE RESOURCE-TYPE
virtual-network-family
INDIVIDUAL RESOURCE-TYPES
vcns
subnets
route-tables
security-lists
dhcp-options
private-ips
public-ips
internet-gateways
drgs
drg-attachments
cpes
ipsec-connections
cross-connects
cross-connect-groups
virtual-circuits
vnics
nic-attachments
COMMENTS
See the table in Details for Verb + Resource-Type Combinations for a detailed breakout of the
API operations covered by each verb, for each individual resource-type included in virtual-
network-family.
Compute
AGGREGATE RESOURCE-TYPE
instance-family
INDIVIDUAL RESOURCE-TYPES
console-histories
instance-console-connection
instance-images
instances
COMMENTS
A policy that uses <verb> instance-family is equivalent to writing one with a separate
<verb> <individual resource-type> statement for each of the individual resource-
types.
See the table in Details for Verb + Resource-Type Combinations for a detailed breakout of
the API operations covered by each verb, for each individual resource-type included in
instance-family.
Block Volume
AGGREGATE RESOURCE-TYPE
volume-family
INDIVIDUAL RESOURCE-TYPES
volumes
volume-attachments
volume-backups
COMMENTS
A policy that uses <verb> volume-family is equivalent to writing one with a separate
<verb> <individual resource-type> statement for each of the individual resource-
types.
See the table in Details for Verb + Resource-Type Combinations for a detailed breakout of
the API operations covered by each verb, for each individual resource-type included in
volume-family.
Supported Variables
Only the general variables are supported (see General Variables for All Requests).
The following tables show the permissions and API operations covered by each verb. The level
of access is cumulative as you go from inspect > read > use > manage. A plus sign (+) in a
table cell indicates incremental access compared to the cell directly above it, whereas "no
extra" indicates no incremental access.
For example, the read and use verbs for the vcns resource-type cover no extra permissions
or API operations compared to the inspect verb. However, the manage verb includes several
extra permissions and API operations.
The tables for the individual Networking resource-types (vcns, subnets, route-tables,
security-lists, dhcp-options, private-ips, public-ips, internet-gateways,
local-peering-gateways, local-peering-from, local-peering-to, drgs, cpes,
ipsec-connections, cross-connects, cross-connect-groups, virtual-circuits, vnics, and
vnic-attachments) each break out the permissions and API operations covered by the inspect,
read, use, and manage verbs. Many create, delete, and attach operations also require
permissions for a related resource-type, which the tables note inline (for example,
CreatePrivateIp and DeletePrivateIp on a VNIC also need use vnics, and creating or deleting
a virtual circuit also involves cross-connect permissions).
The instance-family includes extra permissions beyond the sum of the permissions for the
individual resource-types included in instance-family. For example: It includes a few
permissions for vnics and volumes, even though those resource-types aren't generally
considered part of the instance-family. Why are there extras included? So you can write
fewer policy statements to cover general use cases, like working with an instance that has an
attached block volume. You can write a single statement for instance-family instead of multiple ones covering instances, vnics, and volumes.
- VNIC_READ
- VNIC_ATTACHMENT_READ
- VOLUME_ATTACHMENT_INSPECT
- VOLUME_ATTACHMENT_READ
- VNIC_ATTACH
- VNIC_DETACH
- VOLUME_ATTACHMENT_UPDATE
- VOLUME_ATTACHMENT_CREATE
- VOLUME_ATTACHMENT_DELETE
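As a sketch of the convenience this provides (the group and compartment names below are hypothetical, not from this document), a single aggregate statement can stand in for several individual ones:

```text
Allow group InstanceAdmins to manage instance-family in compartment ProjectA
```

Without the aggregate resource-type, the same access would require separate statements for instances, console-histories, and the extra vnic and volume permissions listed above.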
The following tables list the permissions and API operations covered by each of the individual
resource-types included in instance-family.
instances
INSPECT
inspect vnic-attachments)
volume-attachments)
volume-attachments)
READ
instance-images)
USE
instance-images)
INSTANCE_CREATE_IMAGE InstanceAction
INSTANCE_ATTACH_VOLUME volumes)
volumes)
MANAGE
INSTANCE_ATTACH_SECONDARY_VNIC
TerminateInstance (also need use
family)
family)
console-histories
INSPECT
inspect instances)
READ
images)
USE
MANAGE
instance-console-connection
INSPECT
read instances)
READ
INSTANCE_CONSOLE_CONNECTION_READ GetInstanceConsoleConnection
USE
MANAGE
INSTANCE_CONSOLE_CONNECTION_CREATE CreateInstanceConsoleConnection
INSTANCE_CONSOLE_CONNECTION_DELETE DeleteInstanceConsoleConnection
instance-images
INSPECT
GetImage
READ
subnets)
console-histories)
histories)
USE
INSTANCE_IMAGE_UPDATE
MANAGE
instances)
INSTANCE_IMAGE_DELETE
The following table lists the permissions and API operations covered by each of the individual
resource-types included in volume-family.
volumes
INSPECT
volume-backups)
manage volume-backups)
volume-attachments is required.
READ
USE
manage volume-backups)
MANAGE
volume-attachments
INSPECT
instances)
attachments.
READ
USE
VOLUME_ATTACHMENT_UPDATE
MANAGE
volume-backups
INSPECT
inspect volumes)
READ
volumes)
USE
inspect volumes)
MANAGE
volumes)
VOLUME_BACKUP_DELETE
inspect volumes)
The following table lists the Core Services API operations grouped by resource type, which are
listed in alphabetical order.
DeleteConsoleHistory CONSOLE_HISTORY_DELETE
ListCpes CPE_READ
GetCpe CPE_READ
UpdateCpe CPE_UPDATE
CreateCpe CPE_CREATE
DeleteCpe CPE_DELETE
ListCrossConnects CROSS_CONNECT_READ
GetCrossConnect CROSS_CONNECT_READ
UpdateCrossConnect CROSS_CONNECT_UPDATE
ListCrossConnectGroups CROSS_CONNECT_GROUP_READ
GetCrossConnectGroup CROSS_CONNECT_GROUP_READ
UpdateCrossConnectGroup CROSS_CONNECT_GROUP_UPDATE
CreateCrossConnectGroup CROSS_CONNECT_GROUP_CREATE
DeleteCrossConnectGroup CROSS_CONNECT_GROUP_DELETE
ListDhcpOptions DHCP_READ
GetDhcpOptions DHCP_READ
UpdateDhcpOptions DHCP_UPDATE
ListDrgs DRG_READ
GetDrg DRG_READ
UpdateDrg DRG_UPDATE
CreateDrg DRG_CREATE
DeleteDrg DRG_DELETE
ListDrgAttachments DRG_ATTACHMENT_READ
GetDrgAttachment DRG_ATTACHMENT_READ
UpdateDrgAttachment DRG_ATTACHMENT_UPDATE
DeleteInstanceConsoleConnection INSTANCE_CONSOLE_CONNECTION_DELETE
ListImages INSTANCE_IMAGE_READ
GetImage INSTANCE_IMAGE_READ
UpdateImage INSTANCE_IMAGE_UPDATE
DeleteImage INSTANCE_IMAGE_DELETE
ListInstances INSTANCE_GET
GetInstance INSTANCE_GET
UpdateInstance INSTANCE_UPDATE
InstanceAction INSTANCE_POWER_ACTIONS
ListInternetGateways INTERNET_GATEWAY_READ
GetInternetGateway INTERNET_GATEWAY_READ
UpdateInternetGateway INTERNET_GATEWAY_UPDATE
ListIPSecConnections IPSEC_CONNECTION_READ
GetIPSecConnection IPSEC_CONNECTION_READ
UpdateIpSecConnection IPSEC_CONNECTION_UPDATE
GetIPSecConnectionDeviceConfig IPSEC_CONNECTION_CONFIDENTIAL_READ
GetIPSecConnectionDeviceStatus IPSEC_CONNECTION_READ
ListLocalPeeringGateways LOCAL_PEERING_GATEWAY_READ
GetLocalPeeringGateway LOCAL_PEERING_GATEWAY_READ
UpdateLocalPeeringGateway LOCAL_PEERING_GATEWAY_UPDATE
ConnectLocalPeeringGateways LOCAL_PEERING_GATEWAY_CONNECT_TO
ListPrivateIps PRIVATE_IP_READ
GetPrivateIp PRIVATE_IP_READ
UpdatePrivateIp PRIVATE_IP_UPDATE
ListRouteTables ROUTE_TABLE_READ
GetRouteTable ROUTE_TABLE_READ
ListSecurityLists SECURITY_LIST_READ
GetSecurityList SECURITY_LIST_READ
UpdateSecurityList SECURITY_LIST_UPDATE
ListShapes MACHINE_SHAPE_READ
ListSubnets SUBNET_READ
GetSubnet SUBNET_READ
UpdateSubnet SUBNET_UPDATE
ListVcns VCN_READ
GetVcn VCN_READ
UpdateVcn VCN_UPDATE
CreateVcn VCN_CREATE
DeleteVcn VCN_DELETE
ListVirtualCircuits VIRTUAL_CIRCUIT_READ
GetVirtualCircuit VIRTUAL_CIRCUIT_READ
GetVnic VNIC_READ
UpdateVnic VNIC_UPDATE
GetVnicAttachment VNIC_ATTACHMENT_READ
ListVolumes VOLUME_INSPECT
GetVolume VOLUME_INSPECT
UpdateVolume VOLUME_UPDATE
DeleteVolume VOLUME_DELETE
Resource-Types
db-systems
db-nodes
db-homes
databases
backups
Supported Variables
Only the general variables are supported (see General Variables for All Requests).
The following tables show the permissions and API operations covered by each verb. The level
of access is cumulative as you go from inspect > read > use > manage. A plus sign (+) in a
table cell indicates incremental access compared to the cell directly above it, whereas "no
extra" indicates no incremental access.
For example, the read and use verbs for the db-systems resource-type cover no extra
permissions or API operations compared to the inspect verb. However, the manage verb
includes two more permissions and partially covers two more API operations.
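To illustrate the cumulative verbs in policy form (group and compartment names here are hypothetical):

```text
Allow group DBAuditors to inspect db-systems in compartment Databases
Allow group DBAdmins to manage db-systems in compartment Databases
```

The first statement grants only the listing and get operations shown under INSPECT; the second additionally covers the create and delete permissions (such as DB_SYSTEM_DELETE).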
db-systems
INSPECT
GetDbSystem
ListDbSystemPatches
ListDbSystemPatchHistoryEntries
GetDbSystemPatch
GetDbSystemPatchHistoryEntry
READ
USE
MANAGE
subnets)
DB_SYSTEM_DELETE
db-nodes
INSPECT
READ
USE
MANAGE
DB_NODE_POWER_ACTIONS DbNodeAction
db-homes
INSPECT
GetDBHome
ListDbHomePatches
ListDbHomePatchHistoryEntries
GetDbHomePatch
GetDbHomePatchHistoryEntry
READ
USE
MANAGE
subnets)
DB_HOME_DELETE
databases
INSPECT
GetDatabase
ListDataGuardAssociations
GetDataGuardAssociation
READ
USE
MANAGE
subnets)
The following table lists the API operations in a logical order, grouped by resource type.
ListDbSystems DB_SYSTEM_INSPECT
GetDbSystem DB_SYSTEM_INSPECT
ListDbSystemPatches DB_SYSTEM_INSPECT
ListDbSystemPatchHistoryEntries DB_SYSTEM_INSPECT
GetDbSystemPatch DB_SYSTEM_INSPECT
GetDbSystemPatchHistoryEntry DB_SYSTEM_INSPECT
GetDbNode DB_NODE_INSPECT
DbNodeAction DB_NODE_POWER_ACTIONS
ListDbHomes DB_HOME_INSPECT
GetDbHome DB_HOME_INSPECT
ListDbHomePatches DB_HOME_INSPECT
ListDbHomePatchHistoryEntries DB_HOME_INSPECT
GetDbHomePatch DB_HOME_INSPECT
GetDbHomePatchHistoryEntry DB_HOME_INSPECT
UpdateDbHome DB_HOME_UPDATE
ListDatabases DATABASE_INSPECT
GetDatabase DATABASE_INSPECT
UpdateDatabase DATABASE_UPDATE
GetDataGuardAssociation DATABASE_INSPECT
ListDataGuardAssociations DATABASE_INSPECT
SwitchoverDataGuardAssociation DATABASE_UPDATE
FailoverDataGuardAssociation DATABASE_UPDATE
ReinstateDataGuardAssociation DATABASE_UPDATE
GetBackup DB_BACKUP_INSPECT
ListBackups DB_BACKUP_INSPECT
Aggregate Resource-Type
dns
Individual Resource-Types
dns-zones
dns-records
dns-traffic
COMMENTS
A policy that uses <verb> dns is equivalent to writing one with a separate <verb>
<individual resource-type> statement for each of the individual resource-types.
See the table in Details for Verb + Resource-Type Combinations for a detailed breakout of the
API operations covered by each verb, for each individual resource-type included in dns.
Supported Variables
The DNS Service supports all the general variables (see General Variables for All Requests),
plus the ones listed here.
target.dns-zone.id Entity (OCID) Use this variable to control access to specific DNS zones by OCID.
target.dns-zone.name String Use this variable to control access to specific DNS zones by name.
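For example, a policy can be scoped to a single zone with a where condition (the group, compartment, and zone names here are hypothetical):

```text
Allow group DnsAdmins to manage dns-zones in compartment Networking where target.dns-zone.name = 'example.com'
```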
The following tables show the permissions and API operations covered by each verb. The level
of access is cumulative as you go from inspect > read > use > manage. A plus sign (+) in a
table cell indicates incremental access compared to the cell directly above it, whereas "no
extra" indicates no incremental access.
For example, the use and manage verbs for the dns-traffic resource-type cover no extra
permissions or API operations compared to the read verb.
dns-zones
INSPECT
READ
DNS_ZONE_READ
USE
DNS_ZONE_UPDATE PatchZoneRecords
MANAGE
DNS_ZONE_CREATE CreateZone
DNS_ZONE_DELETE DeleteZone
dns-records
INSPECT
READ
DNS_RECORD_READ GetRRSet
USE
DNS_RECORD_UPDATE DeleteRRSet
PatchRRSet
UpdateRRSet
MANAGE
DNS_RECORD_CREATE
DNS_RECORD_DELETE
dns-traffic
INSPECT
READ
DNS_TRAFFIC_READ
USE
MANAGE
The following table lists the API operations in a logical order, grouped by resource type.
ListZones DNS_ZONE_INSPECT
CreateZone DNS_ZONE_CREATE
DeleteZone DNS_ZONE_DELETE
GetZone DNS_ZONE_READ
UpdateZone DNS_ZONE_UPDATE
GetDomainRecords DNS_RECORD_READ
PatchDomainRecords DNS_RECORD_UPDATE
UpdateDomainRecords DNS_RECORD_UPDATE
DeleteRRSet DNS_RECORD_UPDATE
GetRRSet DNS_RECORD_READ
PatchRRSet DNS_RECORD_UPDATE
UpdateRRSet DNS_RECORD_UPDATE
GetDNSTrafficCounts DNS_TRAFFIC_READ
Resource-Types
- file-systems
- mount-targets
- export-sets
Supported Variables
Only the general variables are supported (see General Variables for All Requests).
The following tables show the permissions and API operations covered by each verb. The level
of access is cumulative as you go from inspect > read > use > manage. A plus sign (+) in a
table cell indicates incremental access compared to the cell directly above it, whereas "no
extra" indicates no incremental access.
For example, the read verb for the file-systems resource-type includes the same
permissions and API operations as the inspect verb, plus the FILE_SYSTEM_READ permission
and a number of API operations (e.g., GetFileSystem, ListMountTargets, etc.). The use
verb covers still another permission and set of API operations compared to read. Lastly,
manage covers two more permissions and operations compared to use.
export-sets
INSPECT
READ
EXPORT_SET_READ GetExport
GetExportSet
ListExports
USE
EXPORT_SET_UPDATE UpdateExportSet
MANAGE
EXPORT_SET_CREATE CreateExportSet
EXPORT_SET_DELETE DeleteExportSet
EXPORT_SET_UPDATE + FILE_SYSTEM_NFSv3_EXPORT CreateExport
EXPORT_SET_UPDATE + FILE_SYSTEM_NFSv3_EXPORT DeleteExport
file-systems
INSPECT
READ
FILE_SYSTEM_READ GetFileSystem
USE
FILE_SYSTEM_UPDATE UpdateFileSystem
MANAGE
FILE_SYSTEM_CREATE CreateFileSystem
FILE_SYSTEM_DELETE DeleteFileSystem
mount-targets
INSPECT
READ
MOUNT_TARGET_READ GetMountTarget
USE
MOUNT_TARGET_UPDATE UpdateMountTarget
MANAGE
MOUNT_TARGET_CREATE + VNIC_CREATE (vnicCompartment) + SUBNET_ATTACH (subnetCompartment) + PRIVATE_DNS_ZONE_ATTACH (privateDnsZoneCompartment) CreateMountTarget
MOUNT_TARGET_DELETE + VNIC_DELETE (vnicCompartment) + SUBNET_DETACH (subnetCompartment) + PRIVATE_DNS_ZONE_ATTACH (privateDnsZoneCompartment) DeleteMountTarget
The following table lists the API operations in a logical order, grouped by resource type.
ListExports EXPORT_SET_READ
GetExport EXPORT_SET_READ
ListExportSets EXPORT_SET_INSPECT
CreateExportSet EXPORT_SET_CREATE
GetExportSet EXPORT_SET_READ
UpdateExportSet EXPORT_SET_UPDATE
DeleteExportSet EXPORT_SET_DELETE
ListFileSystems FILE_SYSTEM_INSPECT
CreateFileSystem FILE_SYSTEM_CREATE
GetFileSystem FILE_SYSTEM_READ
UpdateFileSystem FILE_SYSTEM_UPDATE
DeleteFileSystem FILE_SYSTEM_DELETE
ListMountTargets MOUNT_TARGET_INSPECT
GetMountTarget MOUNT_TARGET_READ
UpdateMountTarget MOUNT_TARGET_UPDATE
DeleteMountTarget MOUNT_TARGET_DELETE
Resource-Types
compartments
users
groups
dynamic-groups
policies
identity-providers
tenancy
tag-namespaces
tag-definitions
Supported Variables
IAM supports all the general variables (see General Variables for All Requests), plus
additional ones listed here:
target.user.name String
target.group.name String
target.policy.name String
target.policy.autoupdate Boolean Whether the policy being acted upon uses "Keep policy current" as its version date (i.e., either null or an empty string for the versionDate parameter in CreatePolicy and UpdatePolicy).
target.compartment.name String
target.tag-namespace.name String
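These variables are typically used in where conditions. For example, a common pattern (the group name here is hypothetical) lets a delegated administrator work with groups except the Administrators group:

```text
Allow group GroupAdmins to use groups in tenancy where target.group.name != 'Administrators'
```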
The following tables show the permissions and API operations covered by each verb. The level
of access is cumulative as you go from inspect > read > use > manage. A plus sign (+) in a
table cell indicates incremental access compared to the cell directly above it, whereas "no
extra" indicates no incremental access.
For example, the read verb for compartments covers no extra permissions or API operations
compared to the inspect verb. The use verb includes the same ones as the read verb, plus
the COMPARTMENT_UPDATE permission and UpdateCompartment API operation. The manage
verb includes the same permissions and API operations as the use verb, plus the COMPARTMENT_CREATE and COMPARTMENT_DELETE permissions and the CreateCompartment and DeleteCompartment API operations.
compartments
INSPECT
GetCompartment
READ
USE
COMPARTMENT_UPDATE UpdateCompartment
MANAGE
COMPARTMENT_CREATE CreateCompartment
COMPARTMENT_DELETE DeleteCompartment
users
INSPECT
inspect groups)
GetUser
READ
USER_READ ListApiKeys
ListSwiftPasswords
ListCustomerSecretKeys
USE
groups)
groups)
MANAGE
USER_CREATE UpdateUserState
USER_DELETE CreateUser
USER_UNBLOCK DeleteUser
USER_APIKEY_ADD UploadApiKey
USER_APIKEY_REMOVE DeleteApiKey
USER_UIPASS_SET CreateOrResetUIPassword
USER_UIPASS_RESET UpdateSwiftPassword
USER_SWIFTPASS_SET CreateSwiftPassword
USER_SWIFTPASS_RESET DeleteSwiftPassword
USER_SWIFTPASS_REMOVE CreateSecretKey
USER_SECRETKEY_ADD UpdateCustomerSecretKey
USER_SECRETKEY_UPDATE DeleteCustomerSecretKey
USER_SECRETKEY_REMOVE
groups
INSPECT
inspect users)
GetGroup
ListIdpGroupMappings,
inspect identity-providers)
READ
USE
users)
users)
AddIdpGroupMapping,
MANAGE
GROUP_CREATE CreateGroup
GROUP_DELETE DeleteGroup
dynamic-groups
INSPECT
GetDynamicGroup
READ
USE
DYNAMIC_GROUP_UPDATE UpdateDynamicGroup
MANAGE
DYNAMIC_GROUP_CREATE CreateDynamicGroup
DYNAMIC_GROUP_DELETE DeleteDynamicGroup
policies
INSPECT
GetPolicy
READ
USE
MANAGE
POLICY_UPDATE UpdatePolicy
POLICY_CREATE CreatePolicy
POLICY_DELETE DeletePolicy
identity-providers
INSPECT
READ
USE
MANAGE
IDENTITY_PROVIDER_DELETE DeleteIdentityProvider
tenancy
INSPECT
GetTenancy
ListRegions
READ
USE
TENANCY_UPDATE
MANAGE
TENANCY_UPDATE CreateRegionSubscription
tag-namespaces
INSPECT
GetTagNamespace
READ
USE
TAG_NAMESPACE_UPDATE UpdateTagNamespace
MANAGE
TAG_NAMESPACE_CREATE CreateTagNamespace
tag-definitions
INSPECT
GetTagDefinition
READ
USE
TAG_DEFINITION_UPDATE UpdateTagDefinition
MANAGE
TAG_DEFINITION_CREATE CreateTagDefinition
The following table lists the API operations in a logical order, grouped by resource type.
ListRegions TENANCY_INSPECT
ListRegionSubscriptions TENANCY_INSPECT
CreateRegionSubscription TENANCY_UPDATE
GetTenancy TENANCY_INSPECT
ListAvailabilityDomains COMPARTMENT_INSPECT
ListCompartments COMPARTMENT_INSPECT
GetCompartment COMPARTMENT_INSPECT
UpdateCompartment COMPARTMENT_UPDATE
CreateCompartment COMPARTMENT_CREATE
ListUsers USER_INSPECT
GetUser USER_INSPECT
UpdateUser USER_UPDATE
CreateUser USER_CREATE
DeleteUser USER_DELETE
ListApiKeys USER_READ
ListSwiftPasswords USER_READ
ListCustomerSecretKeys USER_READ
ListGroups GROUP_INSPECT
GetGroup GROUP_INSPECT
UpdateGroup GROUP_UPDATE
CreateGroup GROUP_CREATE
DeleteGroup GROUP_DELETE
ListDynamicGroups DYNAMIC_GROUP_INSPECT
GetDynamicGroup DYNAMIC_GROUP_INSPECT
UpdateDynamicGroup DYNAMIC_GROUP_UPDATE
CreateDynamicGroup DYNAMIC_GROUP_CREATE
DeleteDynamicGroup DYNAMIC_GROUP_DELETE
ListPolicies POLICY_READ
GetPolicy POLICY_READ
UpdatePolicy POLICY_UPDATE
CreatePolicy POLICY_CREATE
DeletePolicy POLICY_DELETE
ListIdentityProviders IDENTITY_PROVIDER_INSPECT
GetIdentityProvider IDENTITY_PROVIDER_INSPECT
UpdateIdentityProvider IDENTITY_PROVIDER_UPDATE
CreateIdentityProvider IDENTITY_PROVIDER_CREATE
DeleteIdentityProvider IDENTITY_PROVIDER_DELETE
ListTagNamespaces TAG_NAMESPACE_INSPECT
GetTagNamespace TAG_NAMESPACE_INSPECT
CreateTagNamespace TAG_NAMESPACE_CREATE
UpdateTagNamespace TAG_NAMESPACE_UPDATE
ListTagDefinitions TAG_DEFINITION_INSPECT
GetTagDefinition TAG_DEFINITION_INSPECT
CreateTagDefinition TAG_DEFINITION_CREATE
UpdateTagDefinition TAG_DEFINITION_UPDATE
Resource-Types
load-balancers
Supported Variables
Only the general variables are supported (see General Variables for All Requests).
The following tables show the permissions and API operations covered by each verb. The level
of access is cumulative as you go from inspect > read > use > manage. A plus sign (+) in a
table cell indicates incremental access compared to the cell directly above it, whereas "no
extra" indicates no incremental access.
For example, the read verb for load-balancers includes the same permissions and API
operations as the inspect verb, plus the LOAD_BALANCER_READ permission and a number of
API operations (e.g., GetLoadBalancer, ListWorkRequests, etc.). The use verb covers still
another permission and set of API operations compared to read. And manage covers two more
permissions and operations compared to use.
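As a hypothetical example of the use/manage split (group and compartment names invented for illustration):

```text
Allow group LBOperators to use load-balancers in compartment ProjectA
```

This lets the group update backend sets, listeners, and certificates, but not create or delete load balancers, which requires manage.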
LOAD-BALANCERS
INSPECT
ListShapes
ListPolicies
ListProtocols
READ
LOAD_BALANCER_READ GetLoadBalancer
ListWorkRequests
GetWorkRequest
ListBackendSets
GetBackendSet
ListBackends
GetBackend
GetHealthChecker
ListCertificates
USE
LOAD_BALANCER_UPDATE UpdateLoadBalancer
UpdateBackendSet
CreateBackendSet
DeleteBackendSet
UpdateBackend
CreateBackend
DeleteBackend
UpdateHealthChecker
CreateCertificate
DeleteCertificate
UpdateListener
CreateListener
DeleteListener
MANAGE
LOAD_BALANCER_CREATE CreateLoadBalancer
LOAD_BALANCER_DELETE DeleteLoadBalancer
The following table lists the API operations in a logical order, grouped by resource type.
ListLoadBalancers LOAD_BALANCER_INSPECT
GetLoadBalancer LOAD_BALANCER_READ
UpdateLoadBalancer LOAD_BALANCER_UPDATE
CreateLoadBalancer LOAD_BALANCER_CREATE
DeleteLoadBalancer LOAD_BALANCER_DELETE
ListShapes LOAD_BALANCER_INSPECT
ListWorkRequests LOAD_BALANCER_READ
GetWorkRequest LOAD_BALANCER_READ
ListBackendSets LOAD_BALANCER_READ
GetBackendSet LOAD_BALANCER_READ
UpdateBackendSet LOAD_BALANCER_UPDATE
CreateBackendSet LOAD_BALANCER_UPDATE
DeleteBackendSet LOAD_BALANCER_UPDATE
ListBackends LOAD_BALANCER_READ
GetBackend LOAD_BALANCER_READ
UpdateBackend LOAD_BALANCER_UPDATE
CreateBackend LOAD_BALANCER_UPDATE
DeleteBackend LOAD_BALANCER_UPDATE
GetHealthChecker LOAD_BALANCER_READ
UpdateHealthChecker LOAD_BALANCER_UPDATE
ListCertificates LOAD_BALANCER_READ
CreateCertificate LOAD_BALANCER_UPDATE
DeleteCertificate LOAD_BALANCER_UPDATE
UpdateListener LOAD_BALANCER_UPDATE
CreateListener LOAD_BALANCER_UPDATE
DeleteListener LOAD_BALANCER_UPDATE
ListPolicies LOAD_BALANCER_INSPECT
ListProtocols LOAD_BALANCER_INSPECT
Resource-Types
buckets
objects
Supported Variables
Object Storage supports all the general variables (see General Variables for All Requests),
plus the one listed here:
The following tables show the permissions and API operations covered by each verb. The level
of access is cumulative as you go from inspect > read > use > manage. A plus sign (+) in a
table cell indicates incremental access compared to the cell directly above it, whereas "no
extra" indicates no incremental access.
For example, the read verb for buckets includes the same permissions and API operations as
the inspect verb, plus the BUCKET_READ permission and GetBucket API operation. The use
verb covers still another permission and API operation compared to read. And manage covers
two more permissions and operations compared to use.
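For example (group and compartment names hypothetical), read-only access to both resource-types takes two statements:

```text
Allow group Auditors to read buckets in compartment ProjectA
Allow group Auditors to read objects in compartment ProjectA
```

The first covers operations such as GetBucket; the second covers GetObject.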
buckets
INSPECT
ListBuckets
READ
BUCKET_READ GetBucket
ListMultipartUploads
USE
BUCKET_UPDATE UpdateBucket
MANAGE
BUCKET_CREATE CreateBucket
BUCKET_DELETE DeleteBucket
PAR_MANAGE CreatePar
GetPar
ListPar
DeletePar
objects
INSPECT
ListObjects
ListMultipartUploadParts
READ
OBJECT_READ GetObject
USE
UploadPart, CommitMultipartUpload
MANAGE
OBJECT_CREATE CreateObject
OBJECT_DELETE DeleteObject
CreateMultipartUpload
UploadPart
CommitMultipartUpload
AbortMultipartUpload
The following table lists the API operations in a logical order, grouped by resource type.
CreateBucket BUCKET_CREATE
UpdateBucket BUCKET_UPDATE
GetBucket BUCKET_READ
HeadBucket BUCKET_INSPECT
ListBuckets BUCKET_INSPECT
DeleteBucket BUCKET_DELETE
CreateObject OBJECT_CREATE
OverwriteObject OBJECT_OVERWRITE
GetObject OBJECT_READ
DeleteObject OBJECT_DELETE
ListObjects OBJECT_INSPECT
ListMultipartUploadParts OBJECT_INSPECT
ListMultipartUploads BUCKET_READ
AbortMultipartUpload OBJECT_DELETE
CreatePar PAR_MANAGE
GetPar PAR_MANAGE
ListPar PAR_MANAGE
DeletePar PAR_MANAGE
User Credentials
There are three types of credentials that you manage with Oracle Cloud Infrastructure
Identity and Access Management (IAM):
- Console password: For signing in to the Console, the user interface for interacting with Oracle Cloud Infrastructure
- API signing key (in PEM format): For sending API requests, which require authentication
- Swift password: For using a Swift client with Recovery Manager (RMAN) to back up an Oracle Database System (DB System) database to Object Storage
User Password
The administrator who creates a new user in IAM also needs to generate a one-time Console
password for the user (see To create or reset a user's Console password). The administrator
needs to securely deliver the password to the user by providing it verbally, printing it out, or
sending it through a secure email service.
When the user signs in to the Console the first time, they'll be immediately prompted to
change the password. If the user waits more than 7 days to initially sign in and change the
password, it will expire and an administrator will need to create a new one-time password for
the user.
Once the user successfully signs in to the Console, they can use Oracle Cloud Infrastructure
resources according to permissions they've been granted through policies.
Changing a Password
After changing the initial one-time password, a user can change their own password at any time in the Console. A user can always change their own password; an administrator does not need to create a policy to grant that ability.
If a user forgets their Console password and also has no access to the API, they need to ask
an administrator to reset it for them. All administrators (and anyone else who has permission to manage users in the tenancy) can reset Console passwords. The process of resetting the
password generates a new one-time password that the administrator needs to deliver to the
user. The user will need to change their password the next time they sign in to the Console.
If you're an administrator who needs to reset a user's Console password, see To create or
reset a user's Console password.
If a user tries 10 times in a row to sign in to the Console unsuccessfully, they will be
automatically blocked from further attempts. They'll need to contact an administrator to get
unblocked (see To unblock a user).
If you have a non-human system that needs to make API requests, an administrator needs to
create a user for that system and then upload a public key to the IAM service for the system.
There's no need to generate a Console password for the user.
Swift Passwords
Swift is the OpenStack object store service. If you already have an existing Swift client, you
can use it with Recovery Manager (RMAN) to back up an Oracle Database System (DB
System) database to Object Storage. You will need to get a Swift-specific password to do so.
For more information, see Working with Swift Passwords.
Overview
Enterprise companies commonly use an identity provider (IdP) to manage user
login/passwords and to authenticate users for access to secure websites, services, and
resources.
When someone in your company wants to use Oracle Cloud Infrastructure resources in the
Console, they must sign in with a user login and password. Your administrators can federate
with a supported IdP so that each employee can use an existing login and password and not
have to create a new set to use Oracle Cloud Infrastructure resources.
When working with your IdP, your administrator defines groups and assigns each user to one
or more groups according to the type of access the user needs. Oracle Cloud Infrastructure
also uses the concept of groups (in conjunction with IAM policies) to define the type of access
a user has. As part of setting up the relationship with the IdP, your administrator can map
each IdP group to a similarly defined IAM group, so that your company can re-use the IdP
group definitions when authorizing user access to Oracle Cloud Infrastructure resources.
Here's a screenshot from that process:
For information about the number of federations and group mappings you can have, see
Service Limits. There's no limit on the number of federated users.
Note: Any users who are in more than 50 IdP groups cannot
be authenticated to use the Oracle Cloud Infrastructure
Console.
General Concepts
Here's a list of the basic concepts you need to be familiar with.
IDP
IdP is short for identity provider, which is a service that provides identifying credentials
and authentication for users. Oracle Cloud Infrastructure supports identity federation with
Oracle Identity Cloud Service.
FEDERATION TRUST
A relationship that an administrator configures between an IdP and an SP (service provider; here, Oracle Cloud Infrastructure). You can use the
Oracle Cloud Infrastructure Console or API to set up that relationship. Then, the specific
IdP is "federated" to that SP. In the Console and API, the process of federating is thought
of as adding an identity provider to the tenancy.
METADATA URL
An IdP-provided URL that enables an SP to get required information to federate with that
IdP. Oracle Cloud Infrastructure supports the SAML 2.0 protocol, which is an XML-based
standard for sharing required information between the IdP and SP. The metadata URL
points to the SAML metadata document the SP needs.
FEDERATED USER
Someone who signs in to use the Oracle Cloud Infrastructure Console by way of a
federated IdP.
GROUP MAPPING
A mapping between an IdP group and an Oracle Cloud Infrastructure group, used for the
purposes of user authorization.
Experience for Federated Users
Federated users are prompted to enter their Oracle Cloud Infrastructure tenant (e.g., ABCCorp).
They then see a page with two sets of sign-in instructions: one for federated users and one for
non-federated (Oracle Cloud Infrastructure) users. See the following screenshot.
The tenant is shown on the left. Directly below is the sign-in area for federated users. On the
right is the sign-in area for non-federated users.
Federated users choose which identity provider to use for sign-in, and then they're redirected
to that identity provider's sign-in experience for authentication. After entering their login and
password, they are authenticated by the IdP and redirected back to the Oracle Cloud
Infrastructure Console.
Unlike Oracle Cloud Infrastructure users, federated users cannot access the "User Settings"
page in the Console. This page is where a user can change or reset their Console password
and manage other Oracle Cloud Infrastructure credentials such as API signing keys and Swift
passwords.
Here's a more limited policy that restricts access to only the resources related to identity
providers and group mappings:
Allow group IdPAdmins to manage identity-providers in tenancy
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for groups or other IAM components, see Details for IAM.
For instructions for federating with Microsoft Active Directory, see Federating with Microsoft
Active Directory.
Your organization can have multiple Oracle Identity Cloud Service accounts (e.g., one for each
division of the organization). You can federate multiple Identity Cloud Service accounts with
Oracle Cloud Infrastructure, but each federation trust that you set up must be for a single
Identity Cloud Service account.
For each trust, you must set up a web application in Oracle Identity Cloud Service (also called
a trusted application); instructions are in Instructions for Federating. The resulting application
has a set of client credentials (a client ID and client secret). When you federate your Identity
Cloud Service account with Oracle Cloud Infrastructure, you must provide these credentials. If
you need to later update the group mappings, you'll have to provide the credentials again.
REQUIRED URLS
The easiest way to federate with Oracle Identity Cloud Service is through the Oracle Cloud
Infrastructure Console, although you could do it programmatically with the API. If you're
using the Console, you're asked to provide a base URL instead of the metadata URL. The base
URL is the left-most part of the URL in the browser window when you're signed in to the
Identity Cloud Service console:
If you're using the API to federate, you need to provide the metadata URL, which is the base
URL with /fed/v1/metadata appended, like so:
The metadata URL links directly to the IdP-provided XML required to federate. If you're using
the API, you need to provide both the metadata URL and the metadata itself when federating.
For more information, see Managing Identity Providers in the API.
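The relationship between the two URLs can be sketched as follows (the tenant-specific host shown is hypothetical, not from this document):

```python
# Derive the federation metadata URL from an Oracle Identity Cloud Service
# base URL by appending /fed/v1/metadata (hypothetical tenant host).
base_url = "https://idcs-abc123.identity.oraclecloud.com"
metadata_url = base_url.rstrip("/") + "/fed/v1/metadata"
print(metadata_url)  # https://idcs-abc123.identity.oraclecloud.com/fed/v1/metadata
```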
BMCS-SAML-APP
When you federate an Oracle Identity Cloud Service account with Oracle Cloud Infrastructure,
a new SAML application called BMCS-SAML-App is automatically created in that Oracle
Identity Cloud Service account (see the following screenshot). If you later need to delete the
Oracle Identity Cloud Service identity provider from your Oracle Cloud Infrastructure tenancy,
make sure to also delete the BMCS-SAML-App from Oracle Identity Cloud Service. If you
don't, and you later try to federate the same Oracle Identity Cloud Service account again,
you'll get a 409 error saying that an application with the same name already exists (i.e.,
BMCS-SAML-App).
Following is the general process an administrator goes through to set up the identity provider,
and below are instructions for each step. It's assumed that the administrator is an Oracle
Cloud Infrastructure user with the required credentials and access.
a. Add the identity provider to your tenancy and provide information you got from
the IdP.
b. Map the IdP's groups to IAM groups.
3. Make sure you have IAM policies set up for the groups so you can control users' access
to Oracle Cloud Infrastructure resources.
4. Inform your users of the name of your Oracle Cloud Infrastructure tenant and the URL
for the Console (for example, https://console.us-ashburn-1.oraclecloud.com).
1. Go to the Oracle Identity Cloud Service console and sign in to the account you want to
federate. Make sure you're viewing the Admin Console.
2. Add a web (or trusted) application, which enables secure, programmatic interaction
between Oracle Cloud Infrastructure and Oracle Identity Cloud Service. Specify these
items when setting up the application:
a. On the first page:
i. Enter a name for the application (e.g., Oracle Cloud Infrastructure Federation).
ii. Leave other fields empty or unselected.
b. On the next page:
i. Select Configure this application as a client now.
ii. For the Allowed Grant Types, select the check box for Client
Credentials.
iii. Leave other fields empty.
iv. At the bottom of the page, select the check box for Grant the client
access to Identity Cloud Service Admin APIs, and then select
Identity Domain Administrator from the list of roles.
c. On the next page, leave any fields empty or unselected and continue until you
click Finish.
d. Copy and paste the displayed client credentials so you can later give them to
Oracle Cloud Infrastructure when federating. You can view the application's client
credentials any time in the Oracle Identity Cloud Service console. They look
similar to this:
l Client ID: de06b81cb45a45a8acdcde923402a9389d8
l Client secret: 8a297afd-66df-49ee-c67d-39fcdf3d1c31
3. Record the Oracle Identity Cloud Service base URL, which you'll need when federating.
See About Federating with Oracle Identity Cloud Service.
4. Activate the application.
1. Go to the Console and sign in with your Oracle Cloud Infrastructure login and password.
2. Click Identity, and then click Federation.
3. Click Add identity provider.
4. Enter the following:
a. Name: A unique name for this federation trust (e.g., ABCCorp_IDCS in the
screenshot in Experience for Federated Users). This is the name federated users
see when choosing which identity provider to use when signing in to the Console.
The name must be unique across all identity providers you add to the tenancy.
c. Repeat the above sub-steps for each mapping you want to create, and then click
Create.
The identity provider is now added to your tenancy and appears in the list on the Federation
page. Click the identity provider to view its details and the group mappings you just set up.
Oracle assigns the identity provider and each group mapping a unique ID called an Oracle
Cloud ID (OCID). For more information, see Resource Identifiers.
In the future, come to the Federation page if you want to edit the group mappings or delete
the identity provider from your tenancy.
Step 4: Give your federated users the name of the tenant and URL to sign in
The federated users need the URL for the Oracle Cloud Infrastructure Console (for example,
https://console.us-ashburn-1.oraclecloud.com) and the name of your tenant. They'll be
prompted to provide the tenant name when they sign in to the Console.
Your changes typically take effect within seconds in your home region. Wait several more
minutes for the changes to propagate to all regions.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
Identity providers:
l CreateIdentityProvider
l ListIdentityProviders
l GetIdentityProvider
l UpdateIdentityProvider
l DeleteIdentityProvider: Before you can use this operation, you must first use
DeleteIdpGroupMapping to remove all the group mappings for the identity provider.
Group mappings:
l CreateIdpGroupMapping: Each group mapping is a separate entity with its own OCID.
l ListIdpGroupMappings
l GetIdpGroupMapping
l UpdateIdpGroupMapping
l DeleteIdpGroupMapping
Adding Groups and Users for Tenancies Federated with Oracle Identity
Cloud Service
This topic describes how to add groups and users for Oracle Cloud Infrastructure through the
Oracle Identity Cloud Service.
Although you create and manage the users and groups in Oracle Identity Cloud Service, the
access permissions for the groups are managed in Oracle Cloud Infrastructure. Each group
you create in Oracle Identity Cloud Service must be mapped to a group in Oracle Cloud
Infrastructure. The group in Oracle Cloud Infrastructure defines the access to resources.
Before you set up any new groups in Oracle Identity Cloud Service, ensure that you
understand how to assign permissions to groups in Oracle Cloud Infrastructure. See Overview
of IAM.
When you sign up for Oracle Cloud Infrastructure, a group with administrator privileges is
automatically set up for you in Oracle Identity Cloud Service. This group is called OCI_
Administrators and is mapped to the Oracle Cloud Infrastructure Administrators group. To add
users with administrator privileges in Oracle Cloud Infrastructure, add your new users to this
group by following the Adding Users procedure.
8. Click Assign.
9. On the Assign Applications dialog, select BMCS-SAML-App-<your tenancy name> to
assign the Oracle Cloud Infrastructure SAML application for your Oracle Cloud
Infrastructure tenancy to the group.
10. Click OK.
Adding Users
Mapping Groups
The groups you create in Oracle Identity Cloud Service get access through groups you define
in Oracle Cloud Infrastructure. Before your Oracle Identity Cloud Service groups can get
access, you must create groups in Oracle Cloud Infrastructure with the desired permissions
and then map your Oracle Identity Cloud Service groups to these. You can add permissions to
the Oracle Cloud Infrastructure groups before or after you complete the mapping.
Before you start this procedure, ensure that you have your client ID and client secret from the
Oracle Identity Cloud Service console.
1. Open the Oracle Cloud Infrastructure Console, click Identity, and then click
Federation.
2. On the list of identity providers, click the name of the Oracle Identity Cloud Service
provider (for example: OracleIdentityCloudService).
3. Click Edit Mapping.
4. When prompted, provide the client ID and client secret for the Oracle Identity Cloud
Service application, and then click Continue. The Edit Identity Provider dialog
displays any existing mappings.
5. Click + Add Mapping.
6. Select the Oracle Identity Cloud Service group from the list under Identity Provider
Group.
7. Select the IAM group you want to map from the list under Oracle Cloud
Infrastructure Group. If you instead want to create a new IAM group, select New
OCI Group and enter the name of the new group in New OCI Group Name. The new
group will be automatically created in IAM and mapped to the Oracle Identity Cloud
Service group. It will also automatically be given this description, which you can't
change: "Group created during federation".
8. Repeat the + Add Mapping steps for each mapping you want to create, and then click
Submit.
If you haven't already, set up IAM policies to control the access the federated users have to
your organization's Oracle Cloud Infrastructure resources. For more information, see Getting
Started with Policies and Common Policies.
Note that signing out of the Oracle Cloud Infrastructure Console does not sign you out of your
federated identity provider (Oracle Identity Cloud Service, in this case). To sign out of
Oracle Identity Cloud Service, go to your My Services console or to the Oracle Identity Cloud
Service console and click Sign Out there.
Your organization can have multiple Active Directory accounts (e.g., one for each division of
the organization). You can federate multiple Active Directory accounts with Oracle Cloud
Infrastructure, but each federation trust that you set up must be for a single Active Directory
account.
To federate with Active Directory, you set up a trust between Active Directory and Oracle
Cloud Infrastructure. To set up this trust, you perform some steps in the Oracle Cloud
Infrastructure Console and some steps in Active Directory Federation Services.
Following is the general process an administrator goes through to set up federation with
Active Directory. Details for each step are given in the sections below.
Prerequisites
You have installed and configured Microsoft Active Directory Federation Services for your
organization.
You have set up groups in Active Directory to map to groups in Oracle Cloud Infrastructure.
Summary: Get the SAML metadata document and the names of the Active Directory groups
that you want to map to Oracle Cloud Infrastructure Identity and Access Management groups.
1. Locate the SAML metadata document for your AD FS federation server. By default, it is
located at this URL:
https://<yourservername>/FederationMetadata/2007-06/FederationMetadata.xml
Download this document and make a note of where you save it. You will upload this
document to the Console in the next step.
2. Note all the Active Directory groups that you want to map to Oracle Cloud Infrastructure
IAM groups. You will need to enter these in the Console in the next step.
Summary: Add the identity provider to your tenancy. You can set up the group mappings at
the same time, or set them up later.
1. Go to the Console and sign in with your Oracle Cloud Infrastructure login and password.
2. Click Identity, and then click Federation.
3. Click Add identity provider.
4. Enter the following:
a. Display Name: A unique name for this federation trust. This is the name
federated users see when choosing which identity provider to use when signing in
to the Console (e.g., ABCCorp_ADFS shown in the screenshot in Experience for
Federated Users). The name must be unique across all identity providers you add
to the tenancy. You cannot change this later.
b. Description: A friendly description.
c. Type: Select Active Directory Federation Services.
d. XML: Upload the FederationMetadata.xml file you downloaded in Step 1.
5. Click Continue.
6. Set up the mappings between Active Directory groups and IAM groups in Oracle Cloud
Infrastructure. A given Active Directory group can be mapped to zero, one, or multiple
IAM groups, and vice versa. However, each individual mapping is between only
a single Active Directory group and a single IAM group. Changes to group mappings take
effect typically within seconds in your home region, but may take several minutes to
propagate to all regions.
b. Repeat the above sub-steps for each mapping you want to create, and then click
Create.
The identity provider is now added to your tenancy and appears in the list on the Federation
page. Click the identity provider to view its details and the group mappings you just set up.
Oracle assigns the identity provider and each group mapping a unique ID called an Oracle
Cloud ID (OCID). For more information, see Resource Identifiers.
In the future, come to the Federation page if you want to edit the group mappings or delete
the identity provider from your tenancy.
Step 3: Copy the URL for the Oracle Cloud Infrastructure Federation Metadata document
Summary: The Federation page displays a link to the Oracle Cloud Infrastructure Federation
Metadata document. Before you move on to configuring Active Directory Federation Services,
copy the URL for this document.
Step 4: In Active Directory Federation Services, add Oracle Cloud Infrastructure as a
trusted relying party
1. Go to the AD FS Management Console and sign in to the account you want to federate.
2. Add Oracle Cloud Infrastructure as a trusted relying party:
a. From the AD FS Management Console, right-click AD FS and select Add Relying
Party Trust.
b. In the Add Relying Party Trust Wizard, click Start.
c. Select Import data about the relying party published online or on a local
network, and paste the Oracle Cloud Infrastructure Federation Metadata URL
that you copied in Step 3. Click Next.
AD FS connects to the URL. If you get an error during the attempt to read the
federation metadata, you can instead upload the Oracle Cloud Infrastructure
Federation Metadata XML document.
d. Set the display name for the relying party (e.g., Oracle Cloud Infrastructure) and
then click Next.
e. Select I do not want to configure multi-factor authentication settings for
this relying party trust at this time.
f. Choose the appropriate Issuance Authorization Rules to either permit or deny all
users access to the relying party. Note that if you choose "Deny", then you must
later add the authorization rules to enable access for the appropriate users.
Click Next.
g. Review the settings and click Next.
h. Check Open the Edit Claim Rules dialog for this relying party trust when the
wizard closes and then click Close.
S TEP 5: ADD THE CLAIM RULES FOR THE ORACLE CLOUD I NFRASTRUCTURE RELYING PARTY
Summary: Add the claim rules so that the elements that Oracle Cloud Infrastructure requires
(Name ID and groups) are added to the SAML authentication response.
1. In the Add Transform Claim Rule Wizard, select Transform an Incoming Claim,
and click Next.
2. Enter the following:
l Claim rule name: Enter a name for this rule, e.g., nameid.
l Incoming claim type: Select Windows account name.
l Outgoing claim type: Select Name ID.
l Outgoing name ID format: Select Persistent Identifier.
l Select Pass through all claim values.
l Click Finish.
3. The rule is displayed in the rules list. Click Add Rule.
1. Under Claim rule template, select Send Claims Using a Custom Rule. Click Next.
2. In the Add Transform Claim Rule Wizard, enter the following:
a. Claim rule name: Enter groups.
b. Custom rule: Enter the following custom rule:
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname",
Issuer == "AD AUTHORITY"] => issue(store = "Active Directory", types =
("https://auth.oraclecloud.com/saml/claims/groupName"), query = ";tokenGroups;{0}", param
= c.Value);
c. Click Finish.
To limit the groups sent to Oracle Cloud Infrastructure, create two custom claim rules. The
first one retrieves all groups the user belongs to directly and indirectly. The second rule
applies a filter to limit the groups passed to the service provider to only those that match the
filter criteria.
Note that in this custom rule you use add instead of issue. This command passes
the results of the rule to the next rule, instead of sending the results to the service
provider.
c. Click Finish.
4. Now add the filter rule.
a. In the Edit Claim Rules dialog, click Add Rule.
b. Under Claim rule template, select Send Claims Using a Custom Rule. Click
Next.
c. In the Add Transform Claim Rule Wizard, enter the following:
a. Claim rule name: Enter groups.
b. Custom rule: Enter an appropriate filter rule. For example, to send only
groups whose names contain the string "OCI", enter the following:
c:[Type == "http://schemas.xmlsoap.org/claims/Group", Value =~ "(?i)OCI"] => issue
(claim = c);
This rule filters the list from the first rule to only those groups whose names
contain the string OCI (the pattern is case-insensitive and unanchored). The
issue command sends the results of the rule to the service provider.
You can create filters with the appropriate criteria for your organization.
For information on AD FS syntax for custom rules, see the Microsoft
document: Understanding Claim Rule Language in AD FS 2.0 and Higher.
c. Click Finish.
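Note that claim rule regular expressions are unanchored, so a pattern like "(?i)OCI" matches the string anywhere in a group name, case-insensitively; to match only names that start with OCI, anchor the pattern as "(?i)^OCI". A quick illustration of the pattern's behavior, using Python purely because its regex engine treats this simple pattern the same way:

```python
import re

# The sample filter pattern from the claim rule above: matches "OCI"
# anywhere in the name, ignoring case.
pattern = re.compile(r"(?i)OCI")

print(bool(pattern.search("OCI_Administrators")))  # matches at the start
print(bool(pattern.search("Team-oci-devs")))       # also matches mid-name
print(bool(pattern.search("Developers")))          # no match
```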
If you haven't already, set up IAM policies to control the access the federated users have to
your organization's Oracle Cloud Infrastructure resources. For more information, see Getting
Started with Policies and Common Policies.
S TEP 7: GIVE YOUR FEDERATED USERS THE NAME OF THE TENANT AND URL TO SIGN IN
The federated users need the URL for the Oracle Cloud Infrastructure Console (for example,
https://console.us-ashburn-1.oraclecloud.com) and the name of your tenant. They'll be
prompted to provide the tenant name when they sign in to the Console.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
Identity providers:
l CreateIdentityProvider
l ListIdentityProviders
l GetIdentityProvider
l UpdateIdentityProvider
l DeleteIdentityProvider: Before you can use this operation, you must first use
DeleteIdpGroupMapping to remove all the group mappings for the identity provider.
Group mappings:
l CreateIdpGroupMapping: Each group mapping is a separate entity with its own OCID.
l ListIdpGroupMappings
l GetIdpGroupMapping
l UpdateIdpGroupMapping
l DeleteIdpGroupMapping
Introduction
This procedure describes how you can authorize an instance to make API calls in Oracle Cloud
Infrastructure services. After you set up the required resources and policies, an application
running on an instance can call Oracle Cloud Infrastructure public services, removing the need
to configure user credentials or a configuration file.
Concepts
DYNAMIC GROUP
Dynamic groups allow you to group Oracle Cloud Infrastructure instances as principal
actors, similar to user groups. You can then create policies to permit instances in these
groups to make API calls against Oracle Cloud Infrastructure services. Membership in the
group is determined by a set of criteria you define, called matching rules.
MATCHING RULE
When you set up a dynamic group, you also define the rules for membership in the group.
Resources that match the rule criteria are members of the dynamic group. Matching rules
have a specific syntax you follow. See Writing Matching Rules to Define Dynamic Groups.
The following services support instance principals:
l Compute
l Block Volume
l Networking
l Load Balancing
l Object Storage
Security Considerations
Any user who has access to the instance (that is, who can SSH to the instance) automatically
inherits the privileges granted to the instance. Before you grant permissions to an instance
using this procedure, ensure that you know who can access it, and that those users should be
authorized with the permissions you are granting to the instance.
Process Overview
The following steps summarize the process flow for setting up and using instances as
principals. The subsequent sections provide more details.
1. Create a dynamic group. In the dynamic group definition, you provide the matching
rules to specify which instances you want to allow to make API calls against services.
2. Create a policy granting permissions to the dynamic group to access services in your
tenancy (or compartment).
3. A developer in your organization configures the application built using the Oracle Cloud
Infrastructure Java or Python SDK to authenticate using the instance principals
provider. The developer deploys the application and the SDK to all the instances that
belong to the dynamic group.
4. The deployed SDK makes calls to Oracle Cloud Infrastructure APIs as allowed by the
policy (without needing to configure API credentials).
5. For each API call made by an instance, the Audit service logs the event, recording the
OCID of the instance as the value of principalId in the event log.
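For illustration, here is how principalId might be read from an event payload; the event below is a made-up, heavily truncated sketch, not the full Audit event schema:

```python
# Hypothetical, truncated Audit event for illustration only; real events
# carry many more fields.
event = {
    "eventType": "com.oraclecloud.ComputeApi.GetInstance",
    "data": {
        "identity": {
            "principalId": "ocid1.instance.oc1.phx.exampleuniqueID",
        },
    },
}

# When an instance principal made the call, principalId is the instance's OCID.
principal_id = event["data"]["identity"]["principalId"]
print(principal_id)
```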
After you have created a dynamic group, you need to create policies to permit the dynamic
groups to access Oracle Cloud Infrastructure services.
Policy for dynamic groups follows the syntax described in How Policies Work. Review that
topic to understand basic policy features.
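For instance, a statement granting a dynamic group access to a service might look like this (the dynamic group and compartment names here are illustrative):

```
Allow dynamic-group InstanceGroupA to use object-family in compartment ProjectA
```

As with user groups, the verb (inspect, read, use, manage) and the resource type control the level of access granted.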
For Java:
InstancePrincipalsAuthenticationDetailsProvider provider =
InstancePrincipalsAuthenticationDetailsProvider.builder().build();
...
For Python:
import oci

# By default, no configuration needs to be provided: the region and tenancy
# are obtained from the InstancePrincipalsSecurityTokenSigner.
signer = oci.auth.signers.InstancePrincipalsSecurityTokenSigner()
identity_client = oci.identity.IdentityClient(config={}, signer=signer)
...
FAQs
How do I query the instance metadata service for the certificate on the
instance?
Use this curl command: curl http://169.254.169.254/opc/v1/identity/cert.pem
The above rule includes all instances in the compartment except those with the OCIDs
specified.
Managing Users
This topic describes the basics of working with users.
You can create a policy that gives someone power to create new users and credentials, but not
control which groups those users are in. See Let the Help Desk Manage Users.
For the reverse: You can create a policy that gives someone power to determine what groups
users are in, but not create or delete users. See Let Group Admins Manage Group
Membership.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for users or other IAM components, see Details for IAM.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
You might want to use a name that's already in use by your company's own identity system
(e.g., Active Directory, LDAP, etc.). You must also provide the user with a description
(although it can be an empty string), which is a non-unique, changeable description for the
user. This could be the user's full name, a nickname, or other descriptive information. Oracle
will also assign the user a unique ID called an Oracle Cloud ID (OCID). For more information,
see Resource Identifiers.
Note: If you delete a user and then create a new user with
the same name, they'll be considered different users
because they'll have different OCIDs.
A new user has no permissions until you place the user in one or more groups, and there's at
least one policy that gives that group permission to either the tenancy or a compartment.
Exception: each user can manage their own credentials. An administrator does not need to
create a policy to give a user that ability. For more information, see User Credentials.
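For example, if the user is added to a group covered by a statement like the following (the group and compartment names are illustrative), the user gains that group's access:

```
Allow group Analysts to read all-resources in compartment ProjectA
```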
You also need to give the new user some credentials so they can access Oracle Cloud
Infrastructure. A user can have one or both of the following credentials, depending on the type
of access they need: A password for using the Console, and an API signing key for using the
API. For information about working with user credentials, see Managing User Credentials.
If a user tries 10 times in a row to sign in to the Console unsuccessfully, they will be
automatically blocked from further sign-in attempts. An administrator can unblock the user in
the Console (see To unblock a user) or with the UpdateUserState API operation.
You can delete a user, but only if the user is not a member of any groups.
For information about the number of users you can have, see Service Limits.
To create a user
1. Open the Console, click Identity, and then click Users.
A list of the users in your tenancy is displayed.
2. Click Create User.
3. Enter the following:
l Name: A unique name or email address for the user (for tips on what value to
use, see Working with Users). The name must be unique across all users in your
tenancy. You cannot change this later.
l Description: This could be the user's full name, a nickname, or other descriptive
information. You can change this later if you want to.
l Tags: Optionally, you can apply tags. If you have permissions to create a
resource, you also have permissions to apply free-form tags to that resource. To
apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags. If you are not sure if you
should apply tags, skip this option (you can apply tags later) or ask your
administrator.
4. Click Create.
Next, you need to give the user permissions by adding them to at least one group. You also
need to give the user the credentials they need (see Managing User Credentials).
Make sure to let the user know which compartment(s) they have access to.
To delete a user
Prerequisite: To delete a user, the user must not be in any groups.
To unblock a user
If you're an administrator, you can use the following procedure to unblock a user who has
tried 10 times in a row to sign in to the Console unsuccessfully.
For information about managing user credentials in the Console, see Managing User
Credentials.
l CreateUser
l ListUsers
l GetUser
l UpdateUserState: Unblocks a user who has tried to sign in 10 times in a row
unsuccessfully.
l UpdateUser: You can update only the user's description.
l DeleteUser
For information about the API operations for managing user credentials, see Managing User
Credentials.
Managing Groups
This topic describes the basics of working with groups.
For a policy that only gives someone power to determine what groups users are in, see Let
Group Admins Manage Group Membership.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for groups or other IAM components, see Details for IAM.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
A group has no permissions until you write at least one policy that gives that group permission
to either the tenancy or a compartment. When writing the policy, you can specify the group by
using either the unique name or the group's OCID. Per the preceding note, even if you specify
the group name in the policy, IAM internally uses the OCID to determine the group. For
information about writing policies, see Managing Policies.
For information about the number of groups you can have, see Service Limits.
If you're federating with an identity provider, you'll create mappings between the identity
provider's groups and your IAM groups. For more information, see Identity Providers and
Federation.
To create a group
1. Open the Console, click Identity, and then click Groups.
A list of the groups in your tenancy is displayed.
2. Click Create Group.
3. Enter the following:
l Name: A unique name for the group. The name must be unique across all groups
in your tenancy. You cannot change this later.
l Description: A friendly description. You can change this later if you want to.
l Tags: Optionally, you can apply tags. If you have permissions to create a
resource, you also have permissions to apply free-form tags to that resource. To
apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags. If you are not sure if you
should apply tags, skip this option (you can apply tags later) or ask your
administrator.
4. Click Create Group.
Next, you might want to add users to the group, or write a policy for the group. See To create
a policy.
To delete a group
Prerequisite: To delete a group, it must not have any users in it.
l CreateGroup
l ListGroups
l GetGroup
l UpdateGroup: You can update only the group's description.
l DeleteGroup
l ListUserGroupMemberships: Use to get a list of which users are in a group, or which
groups a user is in.
l AddUserToGroup: This operation results in a UserGroupMembership object with its own
OCID.
l GetUserGroupMembership
l RemoveUserFromGroup: This operation deletes a UserGroupMembership object.
For API operations related to group mappings for identity providers, see Identity Providers
and Federation.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for dynamic groups or other IAM components, see Details
for IAM.
A dynamic group has no permissions until you write at least one policy that gives that dynamic
group permission to either the tenancy or a compartment. When writing the policy, you can
specify the dynamic group by using either the unique name or the dynamic group's OCID. Per
the preceding note, even if you specify the dynamic group name in the policy, IAM internally
uses the OCID to determine the dynamic group. For information about writing policies, see
Managing Policies.
You can delete a dynamic group, but only if the group is empty.
For information about the number of dynamic groups you can have, see Service Limits.
l Name: A unique name for the group. The name must be unique across all groups
in your tenancy (dynamic groups and user groups). You can't change this later.
l Description: A friendly description. You can't change this in the Console, but you
can change it Using the API.
4. Enter the Matching Rules. Resources that meet the rule criteria are members of the
group.
l Rule 1: Enter a rule following the guidelines in Writing Matching Rules to Define
Dynamic Groups. You can manually enter the rule in the text box or launch the
rule builder.
l Enter additional rules as needed. To add a rule, click +Additional Rule.
5. Click Create Dynamic Group.
The matching rule syntax is verified, but the OCIDs are not. Be sure that the OCIDs you
enter are correct.
Next, to give the dynamic group permissions, you need to write a policy. See Writing Policies
for Dynamic Groups.
To include all instances that are in a specific compartment, add a rule with the following
syntax:
instance.compartment.id = '<compartment_ocid>'
You can add that rule either directly in the text box, or you can use the rule builder.
l Select ALL.
l Attribute: Select in Compartment ID.
l Value: Enter
ocidv1:compartment:oc1:phx:1457972483881:aaaaaa6q6igvfauxmima74jv2umircgsua
All instances that currently exist or get created in the compartment (identified by the OCID)
are members of this group.
To include all instances that reside in any of two (or more) compartments, add a rule with the
following syntax:
Any {instance.compartment.id = '<compartment_ocid>', instance.compartment.id = '<compartment_ocid>'}
You can add that rule either directly in the text box, or you can use the rule builder.
1. Select ANY.
2. Enter:
l Attribute: Select in Compartment ID.
l Value: Enter
ocidv1:compartment:oc1:phx:1457972483881:aaaaaa6q6igvfauxmima74jv2umircgsua
Instances that currently exist or get created in either of the specified compartments are
members of this group.
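If you generate matching rules programmatically, the Any form can be assembled from a list of compartment OCIDs. A small helper sketch (this helper is illustrative, not part of any Oracle tooling, and the OCIDs are placeholders):

```python
def any_compartment_rule(compartment_ocids):
    # Builds an Any {...} matching rule over instance.compartment.id clauses.
    clauses = ", ".join(
        f"instance.compartment.id = '{ocid}'" for ocid in compartment_ocids
    )
    return f"Any {{{clauses}}}"

# Placeholder OCIDs for illustration only.
rule = any_compartment_rule(
    ["ocid1.compartment.oc1..aaaa", "ocid1.compartment.oc1..bbbb"]
)
print(rule)
```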
l CreateDynamicGroup
l ListDynamicGroups
l GetDynamicGroup
l UpdateDynamicGroup
l DeleteDynamicGroup
Managing Compartments
This topic describes the basics of working with compartments.
For an additional policy related to compartment management, see Let a Compartment Admin
Manage the Compartment.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for compartments or other IAM components, see Details for
IAM.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
The Console is designed to display your resources by compartment within the current region.
When you work with your resources in the Console, you must choose which compartment to
work in from a list on the left side of the page. That list includes all compartments in the
tenancy (including the tenancy, which is the root compartment), regardless of whether you
have permission to work with the resources inside that compartment. If you're an
administrator, you'll have permission to work with any compartment's resources, but if you're
a user with limited access, you probably won't. If a user tries to access a compartment they
don't have permission for, they'll get an error.
When creating a new compartment, you must provide a unique name for it (maximum 100
characters, including letters, numbers, periods, hyphens, and underscores). The name must
be unique across all compartments in your tenancy. You must also provide a description
(although it can be an empty string), which is a non-unique, changeable description for the
compartment. Oracle will also assign the compartment a unique ID called an Oracle Cloud ID (OCID).
For more information, see Resource Identifiers.
After creating a compartment, you need to write at least one policy for it, otherwise no one
can access it (except administrators who have permission to the tenancy). When creating the
policy, you need to specify which compartment to attach it to. This controls who can later
modify or delete the policy. Depending on how you've designed your compartments, you
might attach it to the tenancy, or to the specific compartment itself. For more information,
see Policy Attachment.
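For example, a statement like the following (the group and compartment names are illustrative) could be attached either to the tenancy or to the compartment itself, depending on who should be able to modify it later:

```
Allow group ProjectA-Admins to manage all-resources in compartment ProjectA
```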
To place a new resource in a compartment, you simply specify that compartment when
creating the resource (the compartment is one of the required pieces of information to create
a resource). If you're working in the Console, you just make sure you're first viewing the
compartment where you want to create the resource. Keep in mind that most IAM resources
reside in the tenancy (this includes users, groups, compartments, and any policies attached to
the tenancy). Notice that you can't move a resource from one compartment to another.
It's not possible to get a list of all the resources in a compartment by using a single API call.
Instead you can list all the resources of a given type in the compartment (e.g., all the
instances, all the block storage volumes, etc.).
Compartments cannot be deleted. If you no longer need to use a particular compartment, you
may remove all the resources from it, and modify or delete all policies that refer to it so that
it cannot be used. You can also rename it to change its position in the list.
For information about the number of compartments you can have, see Service Limits.
To create a compartment
Remember: Compartments can't be deleted.
l Name: A unique name for the compartment (maximum 100 characters, including
letters, numbers, periods, hyphens, and underscores). The name must be unique
across all the compartments in your tenancy. Avoid entering confidential
information.
l Description: A friendly description. You can change this later if you want to.
Avoid entering confidential information.
l Tags: Optionally, you can apply tags. If you have permissions to create a
resource, you also have permissions to apply free-form tags to that resource. To
apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags. If you are not sure if you
should apply tags, skip this option (you can apply tags later) or ask your
administrator.
4. Click Create Compartment.
Next, you might want to write a policy for the compartment. See To create a policy.
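As a quick illustration of the naming rules above, they can be expressed as a validation check. This is a hedged sketch, not part of any Oracle SDK; the helper name and regular expression are assumptions based on the rules stated in this topic, and uniqueness across the tenancy must still be enforced by the service.

```python
import re

# Illustrative pattern for the stated rules: 1-100 characters drawn from
# letters, numbers, periods, hyphens, and underscores.
NAME_PATTERN = re.compile(r"^[A-Za-z0-9._-]{1,100}$")

def is_valid_compartment_name(name: str) -> bool:
    # Checks only the local format rules; tenancy-wide uniqueness is a
    # server-side check.
    return bool(NAME_PATTERN.match(name))
```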
2. For the compartment you want to rename, click the Actions icon ( ), and then click
Rename Compartment.
3. Enter the new Name. The name must be unique across all the compartments in your
tenancy. The name can have a maximum of 100 characters, including letters, numbers,
periods, hyphens, and underscores. Avoid entering confidential information.
4. Click Rename Compartment.
Remember that most IAM resources reside in the tenancy (this includes users, groups, and
compartments). Policies can reside in either the tenancy (root compartment) or other
compartments.
l CreateCompartment
l ListCompartments
l GetCompartment: Returns the metadata for the compartment, not its contents.
l UpdateCompartment: You can update the compartment's name, description, and tags.
You can retrieve the contents of a compartment only by resource type. There's no API call that
lists all resources in the compartment. For example, to list all the instances in a
compartment, call the Core Services API ListInstances operation and specify the compartment
ID as a query parameter.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for groups or other IAM components, see Details for IAM.
Overview of Tags
Oracle Cloud Infrastructure supports two kinds of tags: free-form tags and defined tags.
Free-form Tags
Environment: Production
You can apply multiple free-form tags to a single resource (up to the limit).
Because free-form tags are limited in functionality, Oracle recommends using them only when
you are first getting started with tagging, to try out the tagging feature in your system.
For more information about the features and limitations of free-form tags, see Working with
Free-form Tags.
Defined Tags
Defined tags provide more features and control than free-form tags. Before you create a
defined tag key, you first set up a namespace for it. You can think of the namespace as a
container for a set of tag keys. Defined tags support policy, so you can control who can
apply your defined tags. The namespace is the entity to which you can apply policy.
To apply a defined tag to a resource, a user first selects the namespace, then the tag key
within the namespace, and then they can assign the value. Administrators can control which
groups of users are allowed to use each namespace.
The following diagrams illustrate defined tags. Two namespaces are set up: Operations and
HumanResources. The tag keys are defined in the namespaces. Within each namespace, the
tag keys must be unique, but a tag key name can be repeated across namespaces. In the
example, both namespaces include a key named "Environment".
The first instance is tagged with two tags from the Operations namespace, indicating that it
belongs to the Operations production environment and the Operations project "Alpha". The
second instance is tagged with tags from both the HumanResources and Operations namespaces.
Tagging Concepts
Here's a list of the basic tagging concepts:
NAMESPACE
You can think of a tag namespace as a container for your tag keys. It consists of a name
and zero or more tag key definitions. Tag namespaces are not case sensitive and must be
unique across the tenancy. The namespace is also a natural grouping to which
administrators can apply policy. One policy on the tag namespace applies to all the tag
definitions contained in it.
KEY
The name you use to refer to the tag. Tag keys are case insensitive (for example,
"mytagkey" duplicates "MyTagKey"). Tag keys for defined tags must be created in a
namespace. A tag key must be unique within a namespace.
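A minimal sketch of the case-insensitivity rule described above (the helper is hypothetical, not an Oracle API):

```python
def is_duplicate_key(existing_keys, new_key):
    # Tag keys are case insensitive, so "mytagkey" duplicates "MyTagKey".
    return new_key.lower() in {k.lower() for k in existing_keys}
```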
KEY DEFINITION
A key definition defines the schema of a tag and includes a namespace, tag key, and tag
value type.
TAG VALUE
The tag value is the value you give to the tag key. In the example:
Operations.CostCenter=42
Operations is the namespace, CostCenter is the tag key, and 42 is the tag value. A tag
value is optional.
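To make the Namespace.Key=Value structure concrete, here is a small parsing sketch. The helper is illustrative only, not part of any Oracle SDK; it assumes the dotted form shown in the example.

```python
def parse_defined_tag(tag: str):
    # Splits "Namespace.Key=Value" into its parts; the value is optional.
    namespace_and_key, _, value = tag.partition("=")
    namespace, _, key = namespace_and_key.partition(".")
    return namespace, key, value or None
```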
FREE-FORM TAG
A basic metadata association that consists of a key and a value only. Free-form tags have
limited functionality. See Working with Free-form Tags.
RETIRE
You can't delete a tag key definition or a tag namespace. Instead, you retire them. Retired
tag namespaces and key definitions can no longer be applied to resources. However, retired
tags are not removed from the resources to which they have already been applied. You can
still specify retired tags when searching, filtering, reporting, and so on.
REACTIVATE
You can reactivate a tag namespace or tag key definition that has been retired to reinstate
its usage in your tenancy.
Taggable Resources
The following table lists resources that support tagging. This table will be updated as tagging
support is added for more resources.
volume_backups
Compute: instance, instance-image, instanceconsoleconnections
IAM: groups, users, compartments, policies, tag-namespaces (API only), tag-definitions (API only)
Networking: vcns, route-tables, security-lists, dhcp-options, subnets, private-ips
l When applying a free-form tag, you can't see a list of existing free-form tags, so you
don't know what tags and values have already been used.
l You can't see a list of existing free-form tags in your tenancy.
l You can't use free-form tags to control access to resources (that is, you can't include
free-form tags in IAM policies).
The use permission for a resource grants permission to apply, update, and delete free-form
tags for that resource. For example, users who can use instances in CompartmentA can also
apply, update, or delete free-form tags on instances in CompartmentA.
The inspect permission for a resource grants permissions to view free-form tags for that
resource. So users who can view instances in CompartmentA can also view any free-form
tags applied to the instance.
To apply, update, or delete defined tags for a resource, a user must be granted permissions
on the resource and permissions to use the tag namespace.
Users must be granted the use permission on the defined tag's namespace to apply, update,
or delete the defined tag for a resource. The user must also have the use permission for the
resource.
The inspect permission for a resource grants permissions to view defined tags for that
resource. For example, users who can view instances can also view any defined tags applied
to the instance.
Example Scenario
Your company has an Operations department. Within the Operations department are several
cost centers. You want to be able to tag resources that belong to the Operations department
with the appropriate cost center.
Alice already belongs to the group InstanceLaunchers. Alice can manage instances in
CompartmentA. You want Alice and other members of the InstanceLaunchers group to be able
to apply the Operations.CostCenter tag to instances in CompartmentA.
To grant the InstanceLaunchers group access to the Operations namespace, add the following
statement to the InstanceLaunchers policy:
Allow group InstanceLaunchers to use tag-namespaces in CompartmentA where target.tag-namespace.name="Operations"
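If you generate such statements programmatically, a sketch like the following can keep the syntax consistent. The helper function is hypothetical; the statement text mirrors the example shown in this topic.

```python
def tag_namespace_use_statement(group, compartment, namespace):
    # Builds a policy statement in the syntax shown above; the group,
    # compartment, and namespace names are caller-supplied examples.
    return (f'Allow group {group} to use tag-namespaces in {compartment} '
            f'where target.tag-namespace.name="{namespace}"')
```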
When you retire a tag key definition, you can no longer apply it to resources. However, the
tag is not removed from the resources that it was applied to. The tag still exists as metadata
on those resources and you can still call the retired tag in operations (such as listing, sorting,
or reporting).
When you retire a tag namespace, all the tag keys in the namespace are retired. As described
above, this means that all tags in the namespace can no longer be applied to resources,
though existing tags are not removed. No new keys can be created in the retired namespace.
When you reactivate a tag key, it is again available for you to apply to resources.
When you reactivate a tag namespace, you can once again create tag key definitions in the
namespace. However, if you want to use any of the tag key definitions that were retired with
the namespace, you must explicitly reactivate each tag key definition.
Limits on Tags
See Service Limits for a list of applicable limits and instructions for requesting a limit
increase.
l Tag Key: Enter the key. The key can be up to 100 characters in length. Tag keys
are case insensitive and must be unique within the namespace.
l Description: Enter a friendly description.
5. Click Create Tag Key Definition.
The key definition's details are displayed. The description is displayed under the key
definition's name.
4. Click the pencil next to the description.
5. Edit the description and save it.
l GetTagNamespace
l ListTagNamespaces
l CreateTagNamespace
l UpdateTagNamespace: Use this operation to retire or reactivate a namespace.
Managing Regions
This topic describes the basics of managing your region subscriptions. For more information
about regions in Oracle Cloud Infrastructure, see Regions and Availability Domains.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for managing regions or other IAM components, see Details
for IAM.
Resources that you can create and update only in the home region are:
l Users
l Groups
l Policies
l Compartments
l Federation resources
When you use the API to update your IAM resources, you must use the endpoint for your home
region. IAM automatically propagates the updates to all regions in your tenancy.
When you use the Console to update your IAM resources, the Console sends the requests to
the home region for you. You don't need to switch to your home region first. IAM then
automatically propagates the updates to all regions in your tenancy.
When you subscribe your tenancy to a new region, all the policies from your home region are
enforced in the new region. If you want to limit access for groups of users to specific regions,
you can write policies to grant access to specific regions only. For an example policy, see
Restrict Admin Access to a Specific Region.
To subscribe to a region
1. Open the Console, click the Region menu, and then click Manage Regions.
The list of regions offered by Oracle Cloud Infrastructure is displayed. Your home
region is labeled.
2. Locate the region you want to subscribe to and click Subscribe to Region.
Note that it could take several minutes to activate your tenancy in the new region.
Remember, your IAM resources are global, so when the subscription becomes active,
all your existing policies are enforced in the new region.
To switch to the new region, use the region selector in the Console. See Switching
Regions for more information.
l GetTenancy
l ListRegions: Returns a list of regions offered by Oracle Cloud Infrastructure.
l CreateRegionSubscription
l ListRegionSubscriptions
Region FAQs
Managing Policies
This topic describes the basics of working with policies.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies to control who else can write policies or manage other IAM
components, see Let a Compartment Admin Manage the Compartment, and also Details for
IAM.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
When creating a policy, you must specify the compartment where it should be attached, which
is either the tenancy (the root compartment) or another compartment. Where it's attached
governs who can later modify or delete it. For more information, see Policy Attachment. When
creating the policy in the Console, you attach the policy to the desired compartment by
creating the policy while viewing that compartment. If you're using the API, you specify the
identifier of the desired compartment in the CreatePolicy request.
Also when creating a policy, you must specify its version date. For more information, see
Policy Language Version. You can change the version date later if you like.
When creating a policy, you must also provide a unique, non-changeable name for it. The
name must be unique across all policies in your tenancy. You must also provide a description
(although it can be an empty string), which is a non-unique, changeable description for the
policy. Oracle will also assign the policy a unique ID called an Oracle Cloud ID. For more
information, see Resource Identifiers.
For information about how to write a policy, see How Policies Work and Policy Syntax.
When you create a policy, make changes to an existing policy, or delete a policy, your
changes typically take effect within 10 seconds.
You can view a list of your policies in the Console or with the API. In the Console, the list is
automatically filtered to show only the policies attached to the compartment you're viewing.
To determine which policies apply to a particular group, you must view the individual
statements inside all your policies. There isn't a way to automatically obtain that information
in the Console or API.
For information about the number of policies you can have, see Service Limits.
To create a policy
Prerequisite: The group and compartment that you're writing the policy for must already
exist.
2. If you want to attach the policy to a compartment other than the one you're viewing,
select the desired compartment from the list on the left. Where the policy is attached
controls who can later modify or delete it (see Policy Attachment).
3. Click Create Policy.
4. Enter the following:
l Name: A unique name for the policy. The name must be unique across all policies
in your tenancy. You cannot change this later.
l Description: A friendly description. You can change this later if you want to.
l Policy Versioning: Select Keep Policy Current if you'd like the policy to stay
current with any future changes to the service's definitions of verbs and
resources. Or if you'd prefer to limit access according to the definitions that were
current on a specific date, select Use Version Date and enter that date in
YYYY-MM-DD format. For more information, see Policy Language Version.
l Statement: A policy statement. For the correct format to use, see Policy Basics
and also Policy Syntax. If you want to add more than one statement, click +.
l Tags: Optionally, you can apply tags. If you have permissions to create a
resource, you also have permissions to apply free-form tags to that resource. To
apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags. If you are not sure if you
should apply tags, skip this option (you can apply tags later) or ask your
administrator.
5. Click Create.
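The version date's expected YYYY-MM-DD format can be checked with a small sketch like this (illustrative only, not an Oracle utility):

```python
from datetime import datetime

def is_valid_version_date(text: str) -> bool:
    # Accepts only dates written in YYYY-MM-DD format.
    try:
        datetime.strptime(text, "%Y-%m-%d")
        return True
    except ValueError:
        return False
```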
To determine which policies apply to a particular group, you must view the individual
statements inside all your policies. There isn't a way to automatically obtain that information
in the Console.
To delete a policy
l CreatePolicy
l ListPolicies
l GetPolicy
l UpdatePolicy
l DeletePolicy
To manage credentials for users other than yourself, you must be in the Administrators group
or some other group that has permission to work with the tenancy. Having permission to work
with a compartment within the tenancy is not sufficient. For more information, see The
Administrators Group and Policy.
IAM administrators (or anyone with permission to the tenancy) can use either the Console or
the API to manage all aspects of both types of credentials, for themselves and all other users.
This includes creating an initial one-time password for a new user, resetting a password,
uploading API keys, and deleting API keys.
Users who are not administrators can manage their own credentials. In the Console, users
can:
Each user automatically has the ability to create, update, and delete their own Swift
passwords in the Console or the API. An administrator does not need to create a policy to give
a user those abilities. Administrators (or anyone with permission to the tenancy) also have
the ability to manage Swift passwords for other users.
Any user of a Swift client that integrates with Object Storage needs permission to work with
the service. If you're not sure if you have permission, contact your administrator. For
information about policies, see How Policies Work. For basic policies that enable use of Object
Storage, see Common Policies.
Swift passwords do not expire. Each user can have up to two Swift passwords at a time. To
get a Swift password in the Console, see To create a Swift password.
Each user automatically has the ability to create, update, and delete their own Amazon S3
Compatibility API keys in the Console or the API. An administrator does not need to create a
policy to give a user those abilities. Administrators (or anyone with permission to the
tenancy) also have the ability to manage Amazon S3 Compatibility API keys for other users.
Any user of the Amazon S3 Compatibility API with Object Storage needs permission to work
with the service. If you're not sure if you have permission, contact your administrator. For
information about policies, see How Policies Work. For basic policies that enable use of Object
Storage, see Common Policies.
Amazon S3 Compatibility API keys do not expire. Each user can have up to two Amazon S3
Compatibility API keys at a time. To create an Amazon S3 Compatibility API key using the
Console, see To create an Amazon S3 Compatibility API Key.
1. In the top-right corner of the Console, click your user's name, and then click Change
Password.
2. Enter the current password and the new password, and then click Save New
Password.
To unblock a user
If you're an administrator, you can unblock a user who has unsuccessfully tried to sign in to
the Console 10 times in a row. See To unblock a user.
If you're an administrator creating a Swift password for another user, you need to securely
deliver it to the user by providing it verbally, printing it out, or sending it through a secure
email service.
The Amazon S3 Compatibility API key is no longer available to use with the Amazon S3
Compatibility API.
l ListApiKeys
l UploadApiKey
l DeleteApiKey
l CreateSwiftPassword
l UpdateSwiftPassword: You can only update the Swift password's description, not
change the password string itself.
l ListSwiftPasswords
l DeleteSwiftPassword
l CreateCustomerSecretKey
l UpdateCustomerSecretKey: You can only update the secret key's description, not
change the key itself.
l ListCustomerSecretKeys
l DeleteCustomerSecretKey
A load balancer improves resource utilization, facilitates scaling, and helps ensure high
availability. You can configure multiple load balancing policies and application-specific health
checks to ensure that the load balancer directs traffic only to healthy instances. The load
balancer can reduce your maintenance window by draining traffic from an unhealthy
application server before you remove it from service for maintenance.
To accept traffic from the internet, you create a public load balancer. The service assigns it a
public IP address that serves as the entry point for incoming traffic. You can associate the
public IP address with a friendly DNS name through any DNS vendor.
A public load balancer is regional in scope and requires two subnets, each in a separate
availability domain. One subnet hosts the primary load balancer and the other hosts a standby
load balancer to ensure accessibility even during an availability domain outage. Each load
balancer requires one private IP address from its host subnet. The Load Balancing service
attaches a floating public IP address to one of the specified subnets. (The floating public IP
address does not come from your backend subnets.) If there is a failure in that subnet's
availability domain, the load balancer and public IP address switch to the other subnet. The
service treats the two load balancer subnets as equivalent and you cannot denote one as
"primary".
To isolate your load balancer from the internet and simplify your security posture, you can
create a private load balancer. The Load Balancing service assigns it a private IP address that
serves as the entry point for incoming traffic.
When you create a private load balancer, the service requires only one subnet to host both the
primary and standby load balancers. The assigned floating private IP address is local to the
specified subnet. The load balancer is accessible only from within the VCN that contains the
associated subnet, or as further restricted by your security list rules.
A private load balancer is local to the availability domain. The primary and standby load
balancers exist within the same subnet. Each load balancer requires a private IP address from
that subnet in addition to the assigned floating private IP address. If there is an availability
domain outage, the load balancer has no failover.
Your load balancer has a backend set to route incoming traffic to your Compute instances. The
backend set is a logical entity that includes:
The backend servers (Compute instances) associated with a backend set can exist anywhere,
as long as the associated security lists and route tables allow the intended traffic flow.
Every subnet within your VCN has security lists and a route table. Rules within the security
lists determine whether a subnet can accept data traffic from the internet or from another
subnet. When you add backend servers to a backend set, the Load Balancing service can
suggest appropriate security list rules, or you can configure them yourself through the
Networking service. See Security Lists for more information.
Oracle recommends that you distribute your backend servers across all availability domains
within the region.
l Create a VCN with an internet gateway and at least two public subnets for a public load
balancer. Each subnet must reside in a separate availability domain.
l Create a VCN with at least one subnet for a private load balancer.
l Create at least two Compute instances, each in a separate availability domain.
l Create a load balancer.
l Create a backend set with a health check policy.
l Add backend servers (Compute instances) to the backend set.
l Create a listener, with optional SSL handling.
l Update the load balancer subnet security list so it allows the intended traffic.
The following diagram provides a high-level view of a simple public load balancing system
configuration. Far more sophisticated and complex configurations are common.
The Load Balancing service manages application traffic across availability domains within
a region. A region is a localized geographic area, and an availability domain is one or
more data centers located within a region. A region is composed of several availability
domains.
BACKEND SERVER
An application server responsible for generating content in reply to the incoming TCP or
HTTP traffic. You typically identify application servers with a unique combination of
overlay (private) IPv4 address and port, for example, 10.10.10.1:8080 and
10.10.10.2:8080.
BACKEND SET
A logical entity defined by a list of backend servers, a load balancing policy, and a health
check policy. SSL configuration is optional. The backend set determines how the load
balancer directs traffic to the collection of backend servers.
CERTIFICATES
If you use HTTPS or SSL for your listener, you must associate an SSL server certificate
(X.509) with your load balancer. A certificate enables the load balancer to terminate the
connection and decrypt incoming requests before passing them to the backend servers.
HEALTH CHECK
A test to confirm the availability of backend servers. A health check can be a request or a
connection attempt. Based on a time interval you specify, the load balancer applies the
health check policy to continuously monitor backend servers. If a server fails the health
check, the load balancer takes the server temporarily out of rotation. If the server
subsequently passes the health check, the load balancer returns it to the rotation.
You configure your health check policy when you create a backend set. You can configure
TCP-level or HTTP-level health checks for your backend servers.
l TCP-level health checks attempt to make a TCP connection with the backend servers
and validate the response based on the connection status.
l HTTP-level health checks send requests to the backend servers at a specific URI and
validate the response based on the status code or entity data (body) returned.
The service provides application-specific health check capabilities to help you increase
availability and reduce your application maintenance window.
For more information on health check configuration, see Editing Health Check Policies.
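As a rough sketch of the two check styles, assume healthy means a successful TCP connection or a 2xx HTTP status. Treating any 2xx status as healthy is an assumption here; the actual service lets you configure the expected status code and validate the response body as well.

```python
import socket

def http_check_healthy(status_code: int) -> bool:
    # HTTP-level validation sketch: treat any 2xx status as healthy.
    return 200 <= status_code < 300

def tcp_check_healthy(host: str, port: int, timeout: float = 3.0) -> bool:
    # TCP-level check sketch: healthy if a connection can be established
    # within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```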
HEALTH STATUS
An indicator that reports the general health of your load balancers and their components.
For more information, see the Health Status section of Editing Health Check Policies.
LISTENER
A logical entity that checks for incoming traffic on the load balancer's IP address. You
configure a listener's protocol and port number, and the optional SSL settings. To handle
TCP, HTTP, and HTTPS traffic, you must configure multiple listeners.
l TCP
l HTTP/1.0
l HTTP/1.1
l HTTP/2
l WebSocket
LOAD BALANCING POLICY
A load balancing policy tells the load balancer how to distribute incoming traffic to the
backend servers. Common load balancer policies include:
l Round robin
l Least connections
l IP hash
PATH ROUTE SET
A set of path route rules to route traffic to the correct backend set without using multiple
listeners or load balancers.
SHAPE
A template that determines the load balancer's total pre-provisioned maximum capacity
(bandwidth) for ingress plus egress traffic. Available shapes include 100 Mbps, 400 Mbps,
and 8000 Mbps.
SSL
Secure Sockets Layer (SSL) is a security technology for establishing an encrypted link
between a client and a server. You can apply the following SSL configurations to your load
balancer:
SSL TERMINATION
The load balancer handles incoming SSL traffic and passes the unencrypted request to
a backend server.
END-TO-END SSL
The load balancer terminates the SSL connection with an incoming traffic client, and
then initiates an SSL connection to a backend server.
SSL TUNNELING
If you configure the load balancer's listener for TCP traffic, the load balancer tunnels
incoming SSL connections to your application servers.
Load Balancing supports the TLS 1.2 protocol with a default setting of strong cipher
strength. The default supported ciphers include:
l ECDHE-RSA-AES256-GCM-SHA384
l ECDHE-RSA-AES256-SHA384
l ECDHE-RSA-AES128-GCM-SHA256
l ECDHE-RSA-AES128-SHA256
l DHE-RSA-AES256-GCM-SHA384
l DHE-RSA-AES256-SHA256
l DHE-RSA-AES128-GCM-SHA256
l DHE-RSA-AES128-SHA256
SESSION PERSISTENCE
A method to direct all requests originating from a single logical client to a single backend
web server.
SUBNET
A subdivision you define in a VCN, such as 10.0.0.0/24 and 10.0.1.0/24. Each subnet
exists in a single availability domain. A subnet consists of a contiguous range of IP
addresses that do not overlap with other subnets in the VCN. For each subnet, you specify
the routing rules and security lists that apply to it.
You must have at least two public subnets, in separate availability domains, within your
VCN to create a public load balancer. You cannot specify a private subnet for your public
load balancer. A private load balancer requires only one subnet.
For more information on subnets, see VCNs and Subnets and Public vs. Private Subnets.
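The non-overlap requirement for subnets can be verified with Python's standard ipaddress module; a minimal sketch:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    # Subnets in a VCN must not overlap; ipaddress can verify a pair.
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))
```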
VIRTUAL CLOUD NETWORK (VCN)
You need at least one virtual cloud network before you launch a load balancer.
For information about setting up virtual cloud networks, see Overview of Networking.
VISIBILITY
Specifies whether your load balancer is public or private. A public load balancer has a
public IP address that clients can access from the internet. A private load balancer has a
private IP address, from a VCN local subnet, that clients can access only from within your
VCN.
WORK REQUEST
The Load Balancing service handles requests asynchronously. Each request returns a work
request ID (OCID) as the response. You can view the work request item to see the status
of the request.
Resource Identifiers
Each Oracle Cloud Infrastructure resource has a unique, Oracle-assigned identifier called an
Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify
your resources, see Resource Identifiers.
To access the Console, you must use a supported browser. Oracle Cloud Infrastructure
supports the latest versions of Google Chrome, Microsoft Edge, Internet Explorer 11, Firefox,
and Firefox ESR. Note that private browsing mode is not supported for Firefox, Internet
Explorer, or Edge.
For general information about using the API, see About the API.
An administrator in your organization needs to set up groups, compartments, and policies that
control which users can access which services, which resources, and the type of access. For
example, the policies control who can create new users, create and manage the cloud
network, launch instances, create buckets, download objects, etc. For more information, see
Getting Started with Policies. For specific details about writing policies for each of the
different services, see Policy Reference.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud
Infrastructure resources that your company owns, contact your administrator to set up a user
ID for you. The administrator can confirm which compartment or compartments you should be
using.
l You cannot dynamically change the load balancer shape to handle more incoming
traffic. Instead, use the API or Console to create a new load balancer with the desired
shape.
l The Load Balancing service does not support IPv6 addresses.
l The maximum number of concurrent connections is limited when you use stateful
security list rules for your load balancer subnets. There is no limit on concurrent
connections if you use stateless security rules.
l Round Robin
l Least Connections
l IP Hash
When processing load or capacity varies among backend servers, you can refine each of these
policy types with backend server weighting. Weighting affects the proportion of requests
directed to each server. For example, a server weighted '3' receives three times as many
connections as a server weighted '1'. You assign weights based on criteria of your choosing,
such as each server's traffic-handling capacity.
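A sketch of weight-proportional distribution, using the 3:1 example above. The helper is illustrative only, not how the service is implemented.

```python
from itertools import cycle

def weighted_rotation(weights):
    # Expands each server by its weight, e.g. {"a": 3, "b": 1} yields "a"
    # three times for every one "b" across a full rotation.
    expanded = [s for server, w in weights.items() for s in [server] * w]
    return cycle(expanded)
```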
Load balancer policy decisions apply differently to TCP load balancers, cookie-based session
persistent HTTP requests (sticky requests), and non-sticky HTTP requests.
l A TCP load balancer considers policy and weight criteria to direct an initial incoming
request to a backend server. All subsequent packets on this connection go to the same
endpoint.
l An HTTP load balancer configured to handle cookie-based session persistence forwards
requests to the backend server specified by the cookie's session information.
l For non-sticky HTTP requests, the load balancer applies policy and weight criteria to
every incoming request and determines an appropriate backend server. Multiple
requests from the same client could be directed to different servers.
Round Robin
Round Robin is the default load balancer policy. This policy distributes incoming traffic
sequentially to each server in a backend set list. After each server has received a connection,
the load balancer repeats the list in the same order.
Round Robin is a simple load balancing algorithm. It works best when all the backend servers
have similar capacity and the processing load required by each request does not vary
significantly.
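The round robin rotation, refined with the server weighting described above, can be sketched as follows. This is an illustrative simplification, not the service's actual implementation; the server names and weights are hypothetical:

```python
from itertools import cycle

def build_rotation(backends):
    """Expand each (server, weight) pair into a repeating rotation.

    A server weighted 3 appears three times per pass, so it receives
    three times the connections of a server weighted 1.
    """
    rotation = []
    for server, weight in backends:
        rotation.extend([server] * weight)
    return cycle(rotation)

# Hypothetical backend set: 'web1' is weighted 3, 'web2' is weighted 1.
rotation = build_rotation([("web1", 3), ("web2", 1)])
first_pass = [next(rotation) for _ in range(4)]
# 'web1' handles 3 of every 4 connections under this simple scheme.
```

A production balancer would typically interleave the weighted slots to smooth out bursts, but the resulting proportions are the same.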
Least Connections
The Least Connections policy routes incoming non-sticky request traffic to the backend server
with the fewest active connections. This policy helps you maintain an equal distribution of
active connections with backend servers. As with the round robin policy, you can assign a
weight to each backend server and further control traffic distribution.
IP Hash
The IP Hash policy uses an incoming request's source IP address as a hashing key to route
non-sticky traffic to the same backend server. The load balancer routes requests from the
same client to the same backend server as long as that server is available. This policy honors
server weight settings when establishing the initial connection.
IP Hash ensures that requests from a particular client are always directed to the same
backend server, as long as it is available.
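The core idea of IP Hash can be sketched as follows. The hash function and backend addresses here are illustrative choices, not the service's internals:

```python
import hashlib

def pick_backend(client_ip, backends):
    """Map a source IP to a backend via a stable hash (illustrative only)."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(backends)
    return backends[index]

backends = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical backend IPs

# The same client IP always hashes to the same backend
# while the backend set remains stable.
assert pick_backend("203.0.113.7", backends) == pick_backend("203.0.113.7", backends)
```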
Connection Management
After your load balancer connects a client to a backend server, the connection can be closed
due to inactivity. The Load Balancing service honors your backend server keep-alive settings.
Also, you can configure load balancer listeners to control the maximum idle time allowed
during each TCP connection or HTTP request and response pair.
Keep-Alive Settings
For HTTP connections, your load balancer honors backend server keep-alive settings. The load
balancer inspects the Connection: header in backend server responses to determine
connection handling. For example, a backend server has a keep-alive request maximum of
100 and a keep-alive timeout of 60 seconds. The system maintains the connection for 100
transactions or until it has been idle for 60 seconds, whichever limit occurs first.
The keep-alive connection pool can grow, depending on the load balancer and backend server
load level. Your load balancer can close keep-alive connections that are not used for an
extended period. If you set the HTTP keep-alive timeout to a value that is higher than the
listener's timeout value, the listener's setting governs connection timeouts during each
request and response exchange. The keep-alive timeout still applies to idle time between a
completed response and any subsequent request.
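The "whichever limit occurs first" rule from the example above can be expressed as a simple predicate. The parameter names are illustrative, not service API names:

```python
def connection_reusable(transactions, idle_seconds,
                        max_requests=100, keep_alive_timeout=60):
    """Return True while a keep-alive connection may still be reused.

    Mirrors the example above: the connection closes after 100
    transactions or 60 seconds of idle time, whichever limit is
    reached first.
    """
    return transactions < max_requests and idle_seconds < keep_alive_timeout

assert connection_reusable(transactions=42, idle_seconds=10)       # still reusable
assert not connection_reusable(transactions=100, idle_seconds=10)  # request limit hit
assert not connection_reusable(transactions=42, idle_seconds=60)   # idle limit hit
```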
Connection Configuration
When you create a TCP or HTTP listener, you can specify the maximum idle time in seconds.
This setting applies to the time allowed between two successive receive or two successive
send operations. If the configured timeout has elapsed with no packets sent or received, the
client's connection is closed. For HTTP and WebSocket connections, a send operation does not
reset the timer for receive operations and a receive operation does not reset the timer for
send operations.
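The independent send and receive timers can be sketched like this; the class is a hypothetical illustration of the behavior described above, not the listener's implementation:

```python
class IdleTimers:
    """Track send and receive idle time independently (illustrative).

    For HTTP and WebSocket connections, a send does not reset the
    receive timer, and a receive does not reset the send timer.
    """
    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_send = 0.0
        self.last_receive = 0.0

    def on_send(self, now):
        self.last_send = now      # only the send timer resets

    def on_receive(self, now):
        self.last_receive = now   # only the receive timer resets

    def timed_out(self, now):
        return (now - self.last_send > self.timeout or
                now - self.last_receive > self.timeout)

timers = IdleTimers(timeout_seconds=60)
timers.on_send(now=30)           # sends keep flowing...
assert timers.timed_out(now=70)  # ...but 70s with no receive closes the connection
```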
Modify the timeout parameter if either the client or the backend server requires more time to
transmit data. Some examples include:
l The client sends a database query to the backend server and the database takes over
300 seconds to execute. Therefore, the backend server does not transmit any data
within 300 seconds.
l The client uploads data using the HTTP protocol. During the upload, the backend does
not transmit any data to the client for more than 60 seconds.
l The client downloads data using the HTTP protocol. After the initial request, it stops
transmitting data to the backend server for more than 60 seconds.
l The client starts transmitting data after establishing a WebSocket connection, but the
backend server does not transmit data for more than 60 seconds.
l The backend server starts transmitting data after establishing a WebSocket connection,
but the client does not transmit data for more than 60 seconds.
The maximum timeout value is 7200 seconds. Contact My Oracle Support to file a service
request if you want to increase this limit for your tenancy. For more information, see Service
Limits.
X-Forwarded-For
Provides a list of connection IP addresses.
The load balancer appends the last remote peer address to the X-Forwarded-For field from
the incoming request. A comma and space precede the appended address. If the client
request header does not include an X-Forwarded-For field, this value is equal to the X-Real-
IP value. The original requesting client is the first (left-most) IP address in the list, assuming
that the incoming field content is trustworthy. The last address is the last (most recent) peer,
that is, the machine from which the load balancer received the request. The format is:
X-Forwarded-For: <original_client>, <proxy1>, <proxy2>
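The append behavior described above can be sketched as follows; the header dictionary and IP addresses are hypothetical:

```python
def append_forwarded_for(headers, peer_ip):
    """Append the last remote peer to X-Forwarded-For (illustrative sketch).

    If the incoming request has no X-Forwarded-For field, the peer
    address becomes the entire value (matching X-Real-IP).
    """
    existing = headers.get("X-Forwarded-For")
    headers["X-Forwarded-For"] = f"{existing}, {peer_ip}" if existing else peer_ip
    return headers

# Hypothetical hop: the request already passed through one proxy.
headers = {"X-Forwarded-For": "198.51.100.4"}
append_forwarded_for(headers, "203.0.113.9")
# The field is now "198.51.100.4, 203.0.113.9":
# original client first, most recent peer last.
```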
X-Forwarded-Host
Identifies the original host and port requested by the client in the Host HTTP request header.
This header helps you determine the original host, since the hostname or port of the reverse
proxy (load balancer) might differ from the original server handling the request.
X-Forwarded-Host: www.oracle.com:8080
X-Forwarded-Port
Identifies the listener port number that the client used to connect to the load balancer. For
example:
X-Forwarded-Port: 443
X-Forwarded-Proto
Identifies the protocol that the client used to connect to the load balancer, either http or
https. For example:
X-Forwarded-Proto: https
X-Real-IP
Identifies the client's IP address. For the Load Balancing service, the "client" is the last
remote peer.
Your load balancer intercepts traffic between the client and your server. Your server's access
logs, therefore, include only the load balancer's IP address. The X-Real-IP header provides
the client's IP address. For example:
X-Real-IP: 192.168.0.10
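On a backend server, the headers above can be used together to recover the original client address. A minimal sketch, assuming the X-Forwarded-For content is trustworthy:

```python
def original_client_ip(headers):
    """Recover the original client address behind the load balancer (sketch).

    The left-most X-Forwarded-For entry is the original client;
    X-Real-IP holds only the last remote peer.
    """
    forwarded = headers.get("X-Forwarded-For")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return headers.get("X-Real-IP")

headers = {
    "X-Forwarded-For": "192.168.0.10, 10.0.0.5",
    "X-Real-IP": "10.0.0.5",
}
assert original_client_ip(headers) == "192.168.0.10"
```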
Session Persistence
Session persistence is a method to direct all requests originating from a single logical client to
a single backend web server. Backend servers that use caching to improve performance, or to
enable log-in sessions or shopping carts, can benefit from session persistence.
Your load balancer must operate in HTTP mode to support server side, cookie-driven session
persistence. You can enable the session persistence feature when you create a backend set.
To configure session persistence, you specify a cookie name and decide whether to disable
fallback for unavailable servers. You can edit an existing backend set to change the session
persistence configuration.
Cookies
The Load Balancing service activates session persistence when a backend server sends a Set-
Cookie response header containing a recognized cookie name. The cookie name must match
the name specified in the backend set configuration. If the configuration specifies a match-all
pattern, '*', any cookie set by the server activates session persistence. Unless a backend
server activates session persistence, the service follows the load balancing policy specified
when you created the load balancer.
The client computer must accept cookies for the Load Balancing session persistence feature
to work.
The Load Balancing service calculates a hash of the configured cookie and other request
parameters, and sends that value to the client in a cookie. The value stored in the cookie
enables the service to route subsequent client requests to the correct backend server. If your
backend servers change any of the defined cookies, the service recomputes the cookie's value
and resends it to the client.
To stop session persistence, the backend server must delete the session persistence cookie. If
you used the match-all pattern, it must delete all cookies. You can delete cookies by sending a
Set-Cookie response header with a past expiration date. The Load Balancing service routes
subsequent requests using the configured load balancing policy.
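The cookie-name matching rule can be sketched as follows. The cookie name SESSIONID is purely illustrative, not a name the service requires:

```python
def activates_persistence(set_cookie_name, configured_name):
    """Return True if a backend Set-Cookie header activates session persistence.

    A '*' in the backend set configuration matches any cookie name.
    """
    return configured_name == "*" or set_cookie_name == configured_name

assert activates_persistence("SESSIONID", "SESSIONID")  # exact name match
assert activates_persistence("anything", "*")           # match-all pattern
assert not activates_persistence("other", "SESSIONID")  # no match

# To stop persistence, the backend deletes the cookie by expiring it:
expired = "SESSIONID=deleted; Expires=Thu, 01 Jan 1970 00:00:00 GMT"
```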
Fallback
By default, the Load Balancing service directs traffic from a persistent session client to a
different backend server when the original server is unavailable. You can configure the
backend set to disable this fallback behavior. When you disable fallback, the load balancer
fails the request and returns an HTTP 502 code. The service continues to return an HTTP 502
until the client no longer presents a persistent session cookie.
The Load Balancing service considers a server marked drain available for existing persisted
sessions. New requests that are not part of an existing persisted session are not sent to that
server.
For administrators: For a typical policy that gives access to load balancers and their
components, see Let Network Admins Manage Load Balancers.
Also, be aware that a policy statement with inspect load-balancers gives the specified
group the ability to see all information about the load balancers. For more information, see
Details for Load Balancing.
If you're new to policies, see Getting Started with Policies and Common Policies.
Prerequisites
Before you can implement a working load balancer, you need:
l A VCN with at least two public subnets for a public load balancer. Each subnet must
reside in a separate availability domain. For more information on subnets, see VCNs
and Subnets and Public vs. Private Subnets.
For the purposes of access control, you must specify the compartment where you want the
load balancer to reside. Consult an administrator in your organization if you're not sure which
compartment to use. For information about compartments and access control, see Overview
of IAM.
When you create a load balancer within your VCN, you get a public or private IP address, and
provisioned total bandwidth. If you need another IP address, you can create another load
balancer.
A public load balancer requires two subnets to host the active load balancer and a standby.
Each subnet must reside in a separate availability domain and must be publicly accessible. For
more information on VCNs and subnets, see Overview of Networking. You can associate a
public IPv4 address with a DNS name from any vendor. You can use the public IP address as a
front end for incoming traffic. The load balancer can route data traffic to any backend server
that is reachable from the VCN.
A private load balancer requires only one subnet to host the active load balancer and a
standby. The private IP address is local to the subnet. The load balancer is accessible only
from within the VCN that contains the associated subnet, or as further restricted by your
security list rules. The load balancer can route data traffic to any backend server that is
reachable from the VCN.
Optionally, you can associate your listeners with SSL server certificates to manage how your
system handles SSL traffic. See Managing SSL Certificates.
For information about the number of load balancers you can have, see Service Limits.
For a running load balancer, some configuration changes lead to service disruptions. The
following guidelines help you understand the effect of changes to your load balancer.
l Operations that add, remove, or modify a backend server create no disruptions to the
Load Balancing service.
l Operations that edit an existing health check policy create no disruptions to the Load
Balancing service.
l Operations that trigger a load balancer reconfiguration can produce a brief service
disruption with the possibility of some terminated connections.
Health Status
The Load Balancing service provides health status indicators that use your health check
policies to report on the general health of your load balancers and their components. You can
see health status indicators on the Console List and Details pages for load balancers, backend
sets, and backend servers. You also can use the Load Balancing API to retrieve this
information.
For general information about health status indicators, see Editing Health Check Policies.
The Console list of load balancers provides health status summaries that indicate the overall
health of each load balancer. There are four levels of health status indicators. The meaning of
each level is:
l OK: All backend sets associated with the load balancer return a status of OK.
l WARNING: All the following conditions are true:
o At least one backend set associated with the load balancer returns a status of
WARNING or UNKNOWN.
o No backend sets return a status of CRITICAL.
o The load balancer life cycle state is ACTIVE.
l CRITICAL: At least one backend set associated with the load balancer returns a status
of CRITICAL.
l UNKNOWN: Any one of the following conditions is true:
o The load balancer life cycle state is not ACTIVE.
o No backend sets are defined for the load balancer.
o All the following conditions are true:
n More than half of the backend sets associated with the load balancer return
a status of UNKNOWN.
n None of the backend sets return a status of WARNING or CRITICAL.
n The load balancer life cycle state is ACTIVE.
o The system could not retrieve metrics for any reason.
The load balancer Details page provides the same Overall Health status indicator found in
the list of load balancers. It also includes counters for the Backend Set Health status values
reported by the load balancer's child backend sets.
l The number of child entities reporting the indicated health status level.
l If a counter corresponds to the overall health, the badge has a fill color.
l If a counter has a zero value, the badge has a light gray outline and no fill color.
By default, the Console shows a list of VCNs in the compartment you’re currently
working in. Use the click here link to select a VCN in a different compartment.
l Virtual Cloud Network: Required. Specify a VCN for the load balancer.
l Visibility: Specify whether your load balancer is public or private.
o Create Public Load Balancer: Choose this option to create a public load
balancer. You can use the assigned public IP address as a front end for
incoming traffic and to balance that traffic across all backend servers.
o Create Private Load Balancer: Choose this option to create a private
load balancer. You can use the assigned private IP address as a front end
for incoming internal VCN traffic and to balance that traffic across all
backend servers.
l Subnet Compartment: Required, when this option appears. Specify the
compartment from which to select your subnets.
By default, the Console shows a list of subnets in the compartment you’re
currently working in. Use the click here link to select a subnet in a different
compartment.
l Subnet (1 of 2): Required. Select an available subnet. For a public load
balancer, it must be a public subnet.
l Subnet (2 of 2): Required for a public load balancer. Select a second public
subnet. The second subnet must reside in a separate availability domain from the
first subnet.
4. Click Create.
After the system provisions the load balancer, details appear in the load balancer list. To view
more details, click the load balancer name.
After your load balancer is provisioned, you must create at least one backend set and at least
one listener for it.
l CreateLoadBalancer
l DeleteLoadBalancer
l GetLoadBalancer
l GetLoadBalancerHealth
l ListLoadBalancers
l ListLoadBalancerHealths
l UpdateLoadBalancer: You can update the load balancer's display name.
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
For administrators: For a typical policy that gives access to load balancers and their
components, see Let Network Admins Manage Load Balancers.
Also, be aware that a policy statement with inspect load-balancers gives the specified
group the ability to see all information about the load balancers. For more information, see
Details for Load Balancing.
If you're new to policies, see Getting Started with Policies and Common Policies.
Changing the load balancing policy of a backend set temporarily interrupts traffic and can drop
active connections.
For background information on the Oracle Cloud Infrastructure Load Balancing service, see
Overview
of Load Balancing.
Health Status
The Load Balancing service provides health status indicators that use your health check
policies to report on the general health of your load balancers and their components. You can
see health status indicators on the Console List and Details pages for load balancers, backend
sets, and backend servers. You also can use the Load Balancing API to retrieve this
information.
For general information about health status indicators, see Editing Health Check Policies.
The Console list of a load balancer's backend sets provides health status summaries that
indicate the overall health of each backend set. There are four levels of health status
indicators. The meaning of each level is:
l OK: All backend servers in the backend set return a status of OK.
l WARNING: Both of the following conditions are true:
o Half or more of the backend set's backend servers return a status of OK.
o At least one backend server returns a status of WARNING, CRITICAL, or
UNKNOWN.
l CRITICAL: Fewer than half of the backend set's backend servers return a status of OK.
l UNKNOWN: At least one of the following conditions is true:
o More than half of the backend set's backend servers return a status of UNKNOWN.
o The system could not retrieve metrics for any reason.
o The backend set does not have a listener attached.
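The rollup rules above can be sketched as a function. This is an illustration of the documented conditions, not the service's code; where the rules overlap, the order of checks here is an arbitrary choice:

```python
def backend_set_status(server_statuses, has_listener=True):
    """Roll up backend server statuses into a backend set status (sketch).

    server_statuses: list of "OK", "WARNING", "CRITICAL", "UNKNOWN".
    """
    if not has_listener:
        return "UNKNOWN"               # no listener attached
    n = len(server_statuses)
    ok = server_statuses.count("OK")
    unknown = server_statuses.count("UNKNOWN")
    if unknown * 2 > n:                # more than half UNKNOWN
        return "UNKNOWN"
    if ok * 2 < n:                     # fewer than half OK
        return "CRITICAL"
    if ok == n:                        # every server OK
        return "OK"
    return "WARNING"                   # half or more OK, at least one not OK

assert backend_set_status(["OK", "OK", "OK"]) == "OK"
assert backend_set_status(["OK", "OK", "WARNING"]) == "WARNING"
assert backend_set_status(["OK", "CRITICAL", "CRITICAL"]) == "CRITICAL"
assert backend_set_status(["UNKNOWN", "UNKNOWN", "OK"]) == "UNKNOWN"
```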
The backend set Details page provides the same Overall Health status indicator found in the
load balancer's list of backend sets. It also includes counters for the Backend Health status
values reported by the backend set's child backend servers.
l The number of child entities reporting the indicated health status level.
l If a counter corresponds to the overall health, the badge has a fill color.
l If a counter has a zero value, the badge has a light gray outline and no fill color.
l Use SSL: Optional. Check this box to associate an SSL certificate bundle with the
backend set. The following settings are required to enable SSL handling. See
Managing SSL Certificates for more information.
o Certificate Name: Required. The friendly name of the SSL certificate to
use. See Managing SSL Certificates for more information.
o Verify Peer Certificate: Optional. Select this option to enable peer
certificate verification.
o Verify Depth: Optional. Specify the maximum depth for certificate chain
verification.
l Use Session Persistence: Optional. Check this box to enable persistent
sessions from a single logical client to a single backend web server. The following
settings configure session persistence. See Session Persistence for more
information.
o Cookie Name: The cookie name used to enable session persistence.
Specify '*' to match any cookie name.
o Disable Fallback: Check this box to disable fallback when the original
server is unavailable.
l Health Check: Required. Specify the test parameters to confirm the health of
backend servers.
o Protocol: Required. Specify the protocol to use, either HTTP or TCP.
o Port: Required. Specify the backend server port against which to run the
health check.
o Interval in ms: Optional. Specify how frequently to run the health check,
in milliseconds. The default is 10000 (10 seconds).
After your backend set is provisioned, you must specify backend servers for the set. See
Managing Backend Servers for more information.
When you edit a backend set, you can choose a new load balancing policy and modify the SSL
configuration.
1. Open the Console, click Networking, and then click Load Balancers.
2. Click the name of the Compartment that contains the load balancer you want to
modify, and then click the load balancer's name.
3. In the Resources menu, click Backend Sets (if necessary).
4. Click the name of the backend set you want to edit.
If you want to modify the backend set's health check policy, see Editing Health Check Policies.
If you want to add or remove backend servers from the backend set, see Managing Backend
Servers.
1. Open the Console, click Networking, and then click Load Balancers.
2. Click the name of the Compartment that contains the load balancer you want to
modify, and then click the load balancer's name.
3. In the Resources menu, click Backend Sets (if necessary).
4. For the backend set you want to delete, click the Actions icon, and then click
Delete.
5. Confirm when prompted.
l CreateBackendSet
l DeleteBackendSet
l GetBackendSet
l GetBackendSetHealth
l ListBackendSets
l UpdateBackendSet
For administrators: For a typical policy that gives access to load balancers and their
components, see Let Network Admins Manage Load Balancers.
Also, be aware that a policy statement with inspect load-balancers gives the specified
group the ability to see all information about the load balancers. For more information, see
Details for Load Balancing.
If you're new to policies, see Getting Started with Policies and Common Policies.
To route traffic to a backend server, Load Balancing requires the IP address of the compute
instance and the relevant application port. If the backend server resides within the same VCN
as the load balancer, Oracle recommends that you specify the compute instance's private IP
address. If the backend server resides within a different VCN, you must specify the public IP
address of the compute instance. You also must ensure that the VCN's security list rules allow
Internet traffic.
To enable backend traffic, your backend server subnets must have appropriate ingress and
egress rules in their security lists. When you add backend servers to a backend set, the Load
Balancing service Console can suggest rules for you, or you can create your own rules using
the Networking service. To learn more about these rules, see Parts of a Security List Rule.
You can add and remove backend servers without disrupting traffic.
Health Status
The Load Balancing service provides health status indicators that use your health check
policies to report on the general health of your load balancers and their components. You can
see health status indicators on the Console List and Details pages for load balancers, backend
sets, and backend servers. You also can use the Load Balancing API to retrieve this
information.
For general information about health status indicators, see Editing Health Check Policies.
The Console list of a backend set's backend servers provides health status summaries that
indicate the overall health of each backend server. The primary and standby load balancers
both provide health check results that contribute to the health status. There are four levels of
health status indicators. The meaning of each level is:
l OK: The primary and standby load balancer health checks both return a status of OK.
l WARNING: One health check returned a status of OK and one did not.
l CRITICAL: Neither health check returned a status of OK.
l UNKNOWN: One or both health checks returned a status of UNKNOWN or the system
was unable to retrieve metrics.
To view the health status details for a specific backend server, click its IP Address.
The Details page for a backend set provides the same Overall Health status indicator found
in the backend set's list of backend servers. It also reports the following data for the two
health checks performed against each backend server:
IP ADDRESS
The IP address of the health check status report provider, which is a Compute instance
managed by the Load Balancing service. This identifier helps you differentiate same-
subnet (private) load balancers that report health check status.
The Load Balancing service ensures high availability by providing one active and one
standby load balancer. For a public load balancer, each load balancer instance resides in a
different subnet. For a private load balancer, both load balancers reside in the same
subnet. To diagnose a backend server issue, you must know the source of the health
check report. For example, a misconfigured security list might cause a load balancer
instance to report that a backend server is healthy. The other load balancer instance
might return an unhealthy status. In this case, one of the two load balancer instances
cannot communicate with the backend server. Reconfigure the security list to restore the
backend server's health status.
STATUS
l OK
The backend server's response satisfied the health check policy requirements.
l INVALID_STATUS_CODE
The HTTP response status code did not match the expected status code specified by
the health policy.
l TIMED_OUT
The backend server did not respond within the timeout interval specified by the
health policy.
l REGEX_MISMATCH
The backend server response did not satisfy the regular expression specified by the
health policy.
l CONNECT_FAILED
The health check server could not connect to the backend server.
l IO_ERROR
An input or output error occurred while reading a response from or writing a
request to the backend server.
l OFFLINE
The backend server is set to offline, so health checks are not run.
l UNKNOWN
Health check status is not available.
LAST CHECKED
l Weight: Optional. Specify the weight to apply to this server. For more
information, see How Load Balancing Policies Work.
6. Click Submit. If you did not choose to have the service create security list rules for
you, the specified servers are added.
If you chose to have the service create security list rules for you, continue with the next
step.
7. Review the suggested rules to be added to the security list rules for the indicated
subnets. To add the rules, click Add All Security Rules.
The API enables you to mark a backend server in the following ways:
BACKUP
The load balancer forwards ingress traffic to this backend server only when all other
backend servers not marked as "backup" fail the health check policy. This configuration is
useful for handling disaster recovery scenarios.
DRAIN
The load balancer stops forwarding new TCP connections and new non-sticky HTTP
requests to this backend server so an administrator can take the server out of rotation for
maintenance purposes.
OFFLINE
The load balancer forwards no ingress traffic to this backend server.
You also can use the API to specify a server's load balancing policy weight. For more
information on load balancing policies, see How Load Balancing Policies Work.
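Backend selection with these markings can be sketched as follows. The dictionary shape is a hypothetical illustration, not the service's API:

```python
def eligible_backends(servers):
    """Select servers eligible for new connections (illustrative sketch).

    Each server is a dict with 'name', 'healthy', 'backup', 'drain',
    and 'offline' keys. Drained and offline servers never receive new
    traffic; backup servers receive traffic only when every non-backup
    server fails the health check.
    """
    usable = [s for s in servers if not s["drain"] and not s["offline"]]
    primary = [s for s in usable if not s["backup"] and s["healthy"]]
    if primary:
        return primary
    return [s for s in usable if s["backup"] and s["healthy"]]

servers = [
    {"name": "a", "healthy": False, "backup": False, "drain": False, "offline": False},
    {"name": "b", "healthy": True,  "backup": True,  "drain": False, "offline": False},
]
# All non-backup servers are failing, so the backup takes traffic.
assert [s["name"] for s in eligible_backends(servers)] == ["b"]
```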
l CreateBackend
l DeleteBackend
l GetBackend
l GetBackendHealth
l ListBackends
l UpdateBackend
For administrators: For a typical policy that gives access to load balancers and their
components, see Let Network Admins Manage Load Balancers.
Also, be aware that a policy statement with inspect load-balancers gives the specified
group the ability to see all information about the load balancers. For more information, see
Details for Load Balancing.
If you're new to policies, see Getting Started with Policies and Common Policies.
To handle TCP, HTTP, and HTTPS traffic, you must configure at least one listener per traffic
type.
When you create a listener, you must ensure that your VCN's security list rules allow the
listener to accept traffic.
You can have one SSL certificate bundle per listener. You can configure two listeners, one
each for ports 443 and 8443, and associate SSL certificate bundles with each listener. For
more information about SSL certificates for load balancers, see Managing SSL Certificates.
To create a listener
1. Open the Console, click Networking, and then click Load Balancers.
2. Choose the Compartment that contains the load balancer you want to modify, and then
click the load balancer's name.
3. In the Resources menu, click Listeners (if necessary), and then click Create
Listener.
4. In the Create Listener dialog box, enter the following:
l Name: Required. Specify a friendly name for the listener. It must be unique, and
it cannot be changed. Avoid entering confidential information.
l Hostname: Optional. Specify a virtual hostname for this listener.
l Protocol: Required. Specify the protocol to use, either HTTP or TCP.
l Port: Required. Specify the port on which to listen for incoming traffic.
l Use SSL: Optional. Check this box to associate an SSL certificate bundle with the
listener. The following settings are required to enable SSL handling. See Managing
SSL Certificates for more information.
o Certificate Name: Required. The friendly name of the SSL certificate to
use.
o Verify Peer Certificate: Optional. Select this option to enable peer
certificate verification.
o Verify Depth: Optional. Specify the maximum depth for certificate chain
verification.
l Backend Set: Required. Specify the default backend set to which the listener
routes traffic.
l Timeout in seconds: Optional. Specify the maximum idle time in seconds. This
setting is the time allowed between two successive receive or two successive
send operations.
l Path Route Set: Optional. Specify the name of the set of path-based routing
rules that applies to this listener's traffic.
5. Click Create.
When you create a listener, you must also update your VCN's security list rules to allow traffic
to that listener.
To edit a listener
1. Open the Console, click Networking, and then click Load Balancers.
2. Choose the Compartment that contains the load balancer you want to modify, and then
click the load balancer's name.
3. In the Resources menu, click Listeners (if necessary).
4. For the listener you want to edit, click the Actions icon, and then click Edit
Listener.
5. Make the configuration changes you need, and then click Submit.
To delete a listener
1. Open the Console, click Networking, and then click Load Balancers.
2. Choose the Compartment that contains the load balancer you want to modify, and then
click the load balancer's name.
l CreateListener
l DeleteListener
l UpdateListener
For administrators: For a typical policy that gives access to load balancers and their
components, see Let Network Admins Manage Load Balancers.
Also, be aware that a policy statement with inspect load-balancers gives the specified
group the ability to see all information about the load balancers. For more information, see
Details for Load Balancing.
If you're new to policies, see Getting Started with Policies and Common Policies.
Virtual Hostnames
You can assign a virtual hostname to any listener you create for your load balancer. Each
hostname can correspond to an application served from your backend. Some advantages of
virtual hostnames include:
l A single associated IP address. Multiple hostnames, backed by DNS entries, can point to
the same load balancer IP address.
l A single load balancer. You do not need a separate load balancer for each application.
l A single load balancer shape. Running multiple applications behind a single load
balancer helps you manage aggregate bandwidth demands and optimize utilization.
l Simpler backend set management. Managing a set of backend servers under a single
resource simplifies network configuration and administration.
You can define an exact virtual hostname, such as "app.example.com", or you can use a
wildcard name. A wildcard name includes an asterisk (*) in place of the first or last part of the
name. When searching for a virtual server name, the service chooses the first matching
variant in the following priority order:
1. An exact virtual hostname match.
2. The longest wildcard hostname that begins with an asterisk.
3. The longest wildcard hostname that ends with an asterisk.
You do not need to specify the matching pattern to apply. It is inherent in the asterisk
position, that is, starting, ending, or none.
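Wildcard hostname selection can be sketched like this. The exact-before-wildcard, longest-wildcard-wins priority is an assumption stated in the comments; `fnmatch` is used only as a convenient stand-in for asterisk matching:

```python
import fnmatch

def match_hostname(requested, configured_names):
    """Pick the configured hostname for a request (illustrative sketch).

    Assumed priority: an exact name wins over any wildcard, and a
    longer wildcard wins over a shorter one. fnmatch treats '*' as
    a wildcard, mirroring the starting/ending asterisk forms.
    """
    if requested in configured_names:
        return requested
    wildcards = [name for name in configured_names
                 if "*" in name and fnmatch.fnmatch(requested, name)]
    return max(wildcards, key=len, default=None)

names = ["app.example.com", "*.example.com"]
assert match_hostname("app.example.com", names) == "app.example.com"  # exact wins
assert match_hostname("api.example.com", names) == "*.example.com"    # wildcard
```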
Default Listener
Some applications have multiple endpoints or content types, each distinguished by a unique
URI path. For example, /admin, /data, /video, or /cgi. You can use path route rules to
route traffic to the correct backend set without using multiple listeners or load balancers.
A path route is a string that the Load Balancing service matches against an incoming URI to
determine the appropriate destination backend set.
A path route rule consists of a path route string and a pattern match type.
l Specify the pattern match type for each path route rule. Match types include:
o EXACT_MATCH
o FORCE_LONGEST_PREFIX_MATCH
o PREFIX_MATCH
o SUFFIX_MATCH
l Path route rules apply only to HTTP and HTTPS requests. They have no effect on TCP
requests.
A path route set includes all path route strings and matching rules that define the data routing
for a particular listener.
The system applies the following priorities, based on match type, to the path route rules
within a set:
l For one path route rule that specifies the EXACT_MATCH type, there is no cascade of
priorities. The listener looks for an exact match only.
l For two path route rules, one that specifies the EXACT_MATCH type and one that
specifies any other match type, the exact match rule is evaluated first. If no match is
found, then the system looks for the second match type.
l For multiple path route rules specifying various match types, the system applies the
following cascade:
1. EXACT_MATCH.
2. FORCE_LONGEST_PREFIX_MATCH.
3. PREFIX_MATCH or SUFFIX_MATCH.
l The order of the rules within the path route set does not matter for EXACT_MATCH and
FORCE_LONGEST_PREFIX_MATCH. The system applies its priority cascade no matter
where these match types appear in the path route set.
l If matching cascades down to prefix or suffix matching, the order of the rules within the
path route set DOES matter. The system chooses the first prefix or suffix rule that
matches the incoming URI path.
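The cascade above can be sketched as follows. This is a simplified illustration; the tuple layout and function shape are assumptions, not the service's API:

```python
def route(uri_path, rules):
    """rules: list of (match_type, path, backend_set) tuples.
    Applies the cascade: EXACT_MATCH first, then the longest
    FORCE_LONGEST_PREFIX_MATCH, then the first PREFIX_MATCH or
    SUFFIX_MATCH in rule order."""
    for mt, path, backend in rules:
        if mt == "EXACT_MATCH" and uri_path == path:
            return backend
    longest = [r for r in rules
               if r[0] == "FORCE_LONGEST_PREFIX_MATCH" and uri_path.startswith(r[1])]
    if longest:
        return max(longest, key=lambda r: len(r[1]))[2]
    for mt, path, backend in rules:
        if (mt == "PREFIX_MATCH" and uri_path.startswith(path)) or \
           (mt == "SUFFIX_MATCH" and uri_path.endswith(path)):
            return backend
    return None
```

Note that reordering the EXACT_MATCH and FORCE_LONGEST_PREFIX_MATCH tuples in the list does not change the result, but reordering prefix and suffix rules can.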
Virtual hostnames and path route rules route requests to backend sets. Listeners with a virtual
hostname receive priority over the default (no hostname) listener. The following example
shows the results of a simple routing interaction.
The example system includes three listeners and one path route set:
Listener 1
Listener 2
Listener 3
URL                        Routed to
http://animals.com         A
http://animals.com/tame    B
http://animals.com/feral   C
http://captive.com         B
http://captive.com/tame    B
http://captive.com/feral   C
http://wild.com            C
http://wild.com/tame       B
http://wild.com/feral      C
To apply path route rules to a listener, you first create a path route set that contains the rules.
The path route set becomes a part of the load balancer's configuration. You then specify the
path route set to use when you create or update a listener for the load balancer.
After you create a path route set, it becomes available for use with the associated load
balancer. Create or update a listener to apply the path route set.
1. Open the Console, click Networking, and then click Load Balancers.
2. Choose the Compartment that contains the load balancer you want to modify, and then
click the load balancer's name.
3. In the Resources menu, click Path Route Sets (if necessary).
4. Click the name of the path route set you want to update, and then click Edit Path
Route Rules.
5. In the Edit Path Route Rules dialog box, edit the following as needed for each rule
you want to change:
l Match Type: The type of matching to apply to incoming URIs.
l URL String: The path string to match against the incoming URI path, for example
/admin.
l Backend Set Name: The name of the target backend set for requests where the
incoming URI matches the specified path.
6. (Optional) Click Add Line to create another path route rule or click the red box to delete
an existing rule. You can have up to 20 path route rules in a set.
7. Click Save.
l CreateListener
l CreatePathRouteSet
l DeleteListener
l DeletePathRouteSet
l GetPathRouteSet
l ListPathRouteSets
l UpdateListener
l UpdatePathRouteSet
For administrators: For a typical policy that gives access to load balancers and their
components, see Let Network Admins Manage Load Balancers.
Also, be aware that a policy statement with inspect load-balancers gives the specified
group the ability to see all information about the load balancers. For more information, see
Details for Load Balancing.
If you're new to policies, see Getting Started with Policies and Common Policies.
Oracle Cloud Infrastructure accepts x.509 type certificates in PEM format only. The following
is an example PEM encoded certificate:
-----BEGIN CERTIFICATE-----
Base64-encoded certificate
-----END CERTIFICATE-----
If you receive your certificates and keys in formats other than PEM, you must convert them
before you can upload them to the system. You can use OpenSSL to convert certificates and
keys to PEM format. The following example commands provide guidance.
openssl x509 -inform DER -in <certificateName>.der -outform PEM -out <certificateName>.pem
openssl rsa -inform DER -in <privateKeyName>.der -outform PEM -out <privateKeyName>.pem
If you have multiple certificates that form a single certification chain, you must include all
relevant certificates in one file before you upload them to the system. The following example
of a certificate chain file includes four certificates:
-----BEGIN CERTIFICATE-----
Base64-encoded certificate1
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Base64-encoded certificate2
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Base64-encoded certificate3
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Base64-encoded certificate4
-----END CERTIFICATE-----
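Building that single chain file is just concatenation of the individual PEM files in order, typically with your server certificate first and the root certificate last (the ordering convention and file names here are illustrative assumptions):

```python
def build_chain_file(cert_paths, out_path):
    """Concatenate PEM certificates, in order, into one chain file."""
    with open(out_path, "w") as out:
        for path in cert_paths:
            with open(path) as f:
                pem = f.read()
            out.write(pem if pem.endswith("\n") else pem + "\n")

def count_certs(path):
    """Sanity check: number of certificates in a PEM bundle."""
    with open(path) as f:
        return f.read().count("-----BEGIN CERTIFICATE-----")
```

After building the file, `count_certs` should report the number of certificates you expect in the chain before you upload it.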
If your private key submission returns an error, the most common causes are a malformed
key, an incorrect passphrase, or an unsupported encryption method.
If you receive an error related to the private key, you can use OpenSSL to check its
consistency:
openssl rsa -check -in <private_key>.pem
This command verifies that the key is intact, the passphrase is correct, and the file contains a
valid RSA private key.
If the system does not recognize the encryption technology used for your private key, decrypt
the key. Upload the unencrypted version of the key with your certificate bundle. You can use
OpenSSL to decrypt a private key:
openssl rsa -in <private_key>.pem -out <nocrypt_private_key>.pem
l Terminate SSL at the load balancer. This configuration is frontend SSL. Your load
balancer can accept encrypted traffic from a client. There is no encryption of traffic
between the load balancer and the backend servers.
l Implement SSL between the load balancer and your backend servers. This configuration
is backend SSL. Your load balancer does not accept encrypted traffic from clients.
Traffic between the load balancer and the backend servers is encrypted.
l Implement end-to-end SSL. Your load balancer accepts SSL-encrypted traffic from
clients and encrypts traffic to the backend servers.
To terminate SSL at the load balancer, you must create a listener at a port such as 443, and
then associate an uploaded certificate bundle with the listener.
To implement SSL between the load balancer and your backend servers, you must associate
an uploaded certificate bundle with the backend set.
To implement end-to-end SSL, you must associate uploaded certificate bundles with both the
listener and the backend set.
1. Open the Console, click Networking, and then click Load Balancers.
2. Click the name of the Compartment that contains the load balancer you want to
modify, and then click the load balancer's name.
3. In the Resources menu, click Certificates.
4. For the certificate you want to delete, click the Actions icon, and then click Delete.
5. Confirm when prompted.
l CreateCertificate
l DeleteCertificate
l ListCertificates
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don't have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
For administrators: For a typical policy that gives access to load balancers and their
components, see Let Network Admins Manage Load Balancers.
Also, be aware that a policy statement with inspect load-balancers gives the specified
group the ability to see all information about the load balancers. For more information, see
Details for Load Balancing.
If you're new to policies, see Getting Started with Policies and Common Policies.
You configure your health check policy when you create a backend set. You can configure TCP-
level or HTTP-level health checks for your backend servers.
l TCP-level health checks attempt to make a TCP connection with the backend servers
and validate the response based on the connection status.
l HTTP-level health checks send requests to the backend servers at a specific URI and
validate the response based on the status code or entity data (body) returned.
The service provides application-specific health check capabilities to help you increase
availability and reduce your application maintenance window.
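A rough sketch of the two check levels (illustrative only; the real probes are configured through the backend set as described later, and the function names here are assumptions):

```python
import http.client
import socket

def tcp_health_check(host, port, timeout=3.0):
    """TCP-level check: the backend is healthy if the connection completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_health_check(host, port, path="/", timeout=3.0, healthy_status=200):
    """HTTP-level check: the backend is healthy if `path` answers with the
    expected status code within the timeout."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        ok = conn.getresponse().status == healthy_status
        conn.close()
        return ok
    except (OSError, http.client.HTTPException):
        return False
```

An HTTP-level check can catch an application that accepts TCP connections but returns errors, which a TCP-level check would report as healthy.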
Health Status
The Load Balancing service provides health status indicators that use your health check
policies to report on the general health of your load balancers and their components. You can
see health status indicators on the Console List and Details pages for load balancers, backend
sets, and backend servers. You also can use the Load Balancing API to retrieve this
information.
There are four levels of health status indicators. The general meaning of each level is:
OK (GREEN)
No attention required.
WARNING (YELLOW)
Some reporting entities require attention.
The resource is not functioning at peak efficiency or the resource is incomplete and
requires further work.
CRITICAL (RED)
Some or all reporting entities require immediate attention.
UNKNOWN (GREY)
Health status cannot be determined.
The resource is not responding or is in transition and might resolve to another status over
time.
The precise meaning of each level differs among the following components:
l Load balancers
l Backend sets
l Backend servers
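As an intuition for how individual statuses might roll up into one indicator, here is an assumed aggregation, not the service's exact algorithm:

```python
def summarize(statuses):
    """Roll individual health statuses up into one indicator:
    all bad -> CRITICAL, some bad -> WARNING, none known -> UNKNOWN."""
    known = [s for s in statuses if s != "UNKNOWN"]
    if not known:
        return "UNKNOWN"
    if all(s == "CRITICAL" for s in known):
        return "CRITICAL"
    if any(s in ("WARNING", "CRITICAL") for s in known):
        return "WARNING"
    return "OK"
```

For example, one CRITICAL backend among healthy peers would surface as WARNING at the backend-set level, prompting you to drill down.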
At the highest level, load balancer health reflects the health of its components. The health
status indicators provide information you might need to drill down and investigate an existing
issue. Some common issues that the health status indicators can help you detect and correct
include:
A HEALTH CHECK IS MISCONFIGURED.
In this case, all the backend servers for one or more of the affected listeners report as
unhealthy. If your investigation finds that the backend servers do not have problems, then
a backend set probably includes a misconfigured health check.
A LISTENER IS MISCONFIGURED.
All the backend server health status indicators report OK, but the load balancer does not
pass traffic on a listener.
If your investigation shows that the listener is not at fault, check the security list
configuration.
Health status indicators help you diagnose two cases of misconfigured security lists:
l All entity health status indicators report OK, but traffic does not flow (as with
misconfigured listeners). If the listener is not at fault, check the security list
configuration.
l All entity health statuses report as unhealthy. You have checked your health check
configuration and your services run properly on your backend servers.
In this case, your security lists might not include the IP range for the source of the
health check requests. You can find the health check source IP on the Details page
for each backend server. You can also use the API to find the IP in the
sourceIpAddress field of the HealthCheckResult object.
A BACKEND SERVER IS UNHEALTHY.
A backend server might be unhealthy or the health check might be misconfigured. To see
the corresponding error code, check the status field on the backend server's Details page.
You can also use the API to find the error code in the healthCheckStatus field of the
HealthCheckResult object.
l Port: Specify the backend server port against which to run the health
check. You can enter the value '0' to have the health check use the backend
server's traffic port.
l URL Path (URI): (HTTP only) Required. Specify a URL endpoint against which to
run the health check.
l Interval in ms: Required. Specify how frequently to run the health check, in
milliseconds.
l Timeout in ms: Required. Specify the maximum time in milliseconds to wait for
a reply to a health check. A health check is successful only if a reply returns
within this timeout period.
7. Click Save.
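The Interval and Timeout settings interact as described above: a check passes only if a reply arrives within the timeout, and checks repeat once per interval. A minimal polling sketch (the parameter names are assumptions for illustration):

```python
import time

def poll(check, interval_ms, timeout_ms, rounds):
    """Run `check(timeout_seconds)` once per interval; collect results."""
    results = []
    for _ in range(rounds):
        start = time.monotonic()
        results.append(bool(check(timeout_ms / 1000.0)))
        # Sleep out the remainder of the interval before the next check.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, interval_ms / 1000.0 - elapsed))
    return results
```

Because the timeout bounds each check, it should always be shorter than the interval; otherwise checks can overlap.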
l UpdateBackendSet
l GetBackendHealth
l GetBackendSetHealth
l GetLoadBalancerHealth
l ListLoadBalancerHealths
For administrators: For a typical policy that gives access to load balancers and their
components, see Let Network Admins Manage Load Balancers.
Also, be aware that a policy statement with inspect load-balancers gives the specified
group the ability to see all information about the load balancers. For more information, see
Details for Load Balancing.
If you're new to policies, see Getting Started with Policies and Common Policies.
ACCEPTED
IN PROGRESS
A work request record exists for the specified request, but there is no associated
WORK_COMPLETED record.
SUCCEEDED
A work request record exists for this request and an associated WORK_COMPLETED record
has the state SUCCEEDED.
FAILED
A work request record exists for this request and an associated WORK_COMPLETED record
has the state FAILED.
1. Open the Console, click Networking, and then click Load Balancers.
2. Click the name of the Compartment that contains the load balancer you want to
review, and then click the load balancer's name.
3. In the Resources menu, click Work Requests. The status of all work requests
appears on the page.
l ListWorkRequests
l GetWorkRequest
Overview of Networking
When you work with Oracle Cloud Infrastructure, one of the first steps is to set up a virtual
cloud network (VCN) for your cloud resources. This topic gives you an overview of Oracle
Cloud Infrastructure Networking components and typical scenarios for using a VCN.
Networking Components
The Networking service uses virtual versions of traditional network components you might
already be familiar with:
SUBNETS
Subdivisions you define in a VCN (for example, 10.0.0.0/24 and 10.0.1.0/24). Subnets
contain virtual network interface cards (VNICs), which attach to instances. Each subnet
exists in a single availability domain and consists of a contiguous range of IP addresses
that do not overlap with other subnets in the VCN. Subnets act as a unit of configuration
within the VCN: All VNICs in a given subnet use the same route table, security lists, and
DHCP options (see the definitions that follow). You can designate a subnet as private when
you create it, which means VNICs in the subnet can't have public IP addresses. See
Internet Access.
VNIC
A virtual network interface card (VNIC), which attaches to an instance and resides in a
subnet to enable a connection to the subnet's VCN. The VNIC determines how the instance
connects with endpoints inside and outside the VCN. Each instance has a primary VNIC
that's created during instance launch and cannot be removed. You can add secondary
VNICs to an existing instance (in the same availability domain as the primary VNIC), and
remove them as you like. For more information, see Virtual Network Interface Cards
(VNICs).
PRIVATE IP
A private IP address and related information for addressing an instance (for example, a
hostname for DNS). Each VNIC has a primary private IP, and you can add and remove
secondary private IPs. For more information, see Private IP Addresses.
PUBLIC IP
A public IP address and related information. You can optionally assign a public IP to your
instances or other resources that have a private IP. Public IPs can be either ephemeral or
reserved. For more information, see Public IP Addresses.
INTERNET GATEWAY
An optional virtual router that you can add to your VCN. It provides a path for network
traffic between your VCN and the internet. For more information, see Internet Access and
also Typical Networking Scenarios.
ROUTE TABLES
Virtual route tables for your VCN. Your VCN comes with a default route table, and you can
add more. These route tables provide mapping for the traffic from subnets via gateways
or specially configured instances to destinations outside the VCN. For more information,
see Route Tables.
SECURITY LISTS
Virtual firewall rules for your VCN. Your VCN comes with a default security list, and you
can add more. These security lists provide ingress and egress rules that specify the types
of traffic allowed in and out of the instances. You can choose whether a given rule is
stateful or stateless. For more information, see Security Lists.
DHCP OPTIONS
Configuration information that is automatically provided to the instances when they boot
up. For more information, see DHCP Options.
For your VCN, Oracle recommends using one of the private IP address ranges specified in RFC
1918 (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16). However, you can use a publicly routable
range. Regardless, this documentation uses the term private IP address when referring to IP
addresses in your VCN's CIDR.
The VCN's CIDR must not overlap with your on-premises network or another VCN you peer
with. The subnets in a given VCN must not overlap with each other. For reference, here's a
CIDR calculator.
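You can verify these constraints with Python's standard ipaddress module, for example:

```python
import ipaddress

vcn = ipaddress.ip_network("10.0.0.0/16")
subnets = [ipaddress.ip_network("10.0.0.0/24"),
           ipaddress.ip_network("10.0.1.0/24")]
on_prem = ipaddress.ip_network("172.16.0.0/12")

# The VCN's CIDR must not overlap the on-premises network ...
assert not vcn.overlaps(on_prem)
# ... and each subnet must sit inside the VCN's CIDR
# without overlapping its peer subnets.
assert all(s.subnet_of(vcn) for s in subnets)
assert not subnets[0].overlaps(subnets[1])
```

The same `overlaps` check applies when planning a peered VCN's CIDR.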
When you create a new subnet, you can associate a route table with it. If you don’t, the
default route table is automatically associated with the subnet. The same is true for security
lists and sets of DHCP options. After you associate a particular route table, security list, or set
of DHCP options with a subnet (whether it’s the default or not), you can’t change that
association. But as mentioned before, you can change the contents of the component.
For more information, see Route Tables, Security Lists, and DHCP Options.
Connectivity Choices
You can set up your VCN to have access to the internet if you like. You can also privately
connect your VCN to your on-premises network and to another VCN.
Internet Access
Instances without public IP addresses or access to an internet gateway cannot access the
internet directly. However, you can configure a subnet to access the internet indirectly by
either:
l Setting up an instance in your VCN to perform Network Address Translation (NAT). For
information about routing subnet traffic to an instance, see Using a Private IP as a Route
Target.
l Connecting your VCN to your on-premises network via a DRG and then routing your
internet traffic to your on-premises network. Your on-premises network must be
configured to route traffic to the internet. For more information, see Connection to Your
On-Premises Network.
When you create a subnet, by default it's considered public, which means instances in that
subnet are allowed to have public IP addresses. Whoever launches the instance chooses
whether it will have a public IP address. You can override that behavior when creating the
subnet and request that it be private, which means instances launched in the subnet are
prohibited from having public IP addresses. Network administrators can therefore ensure that
instances in the subnet have no internet access, even if the VCN has a working internet
gateway, and security lists and firewall rules allow the traffic.
Each instance has a primary VNIC that's created during instance launch and cannot be
removed. You can add secondary VNICs to an existing instance (in the same availability
domain as the primary VNIC), and remove them as you like.
Every VNIC has a private IP address from the associated subnet's CIDR. You can choose the
particular IP address (during instance launch or secondary VNIC creation), or Oracle can
choose it for you. You can also add secondary private IPs to a VNIC.
If the VNIC is in a public subnet, then each private IP on that VNIC can have a public IP
assigned to it at your discretion. Oracle chooses the particular IP address. There are two
types of public IPs: ephemeral and reserved. An ephemeral public IP exists only for the
lifetime of the private IP it's assigned to. In contrast, a reserved public IP exists as long as
you want it to. You maintain a pool of reserved public IPs and allocate them to your instances
at your discretion. You can move them from resource to resource in a region as you need to.
There are two ways to connect your on-premises network to Oracle Cloud Infrastructure:
l IPSec VPN: Offers multiple IPSec tunnels between your existing network's edge and
your VCN, by way of a DRG that you create and attach to your VCN.
l Oracle Cloud Infrastructure FastConnect: Offers a private connection between your
existing network's edge and Oracle Cloud Infrastructure. Traffic does not traverse the
internet. Both private peering and public peering are supported. That means you can
access private IPv4 addresses in your VCN as well as regional public IPv4 addresses in
Oracle Cloud Infrastructure (for example, Object Storage or public load balancers in
your VCN).
You can use one or both types of the preceding connections. If using both, you can use them
simultaneously, or in a redundant configuration.
You can connect your VCN to another VCN in the same region and tenancy over a private
connection that doesn't require the traffic to traverse the internet. In general, this type of
connection is referred to as local VCN peering. Each VCN must have a local peering gateway,
as well as specific IAM policies, route rules, and security lists that permit the connection to be
made and the desired network traffic to flow over the connection. For more information, see
VCN Peering.
This is the fastest way to try out Networking. The following figure illustrates the scenario. You
set up a VCN with:
You then launch one or more compute instances in one of the subnets. In this scenario, each
instance gets both a public and private IP address. You can then communicate with the
instances via the public IP address over the internet from your on-premises network.
For instructions on using the Console or API to set up a VCN with public subnets, see Scenario
A: Public Subnets.
For additional security, you could modify all the security list
ingress rules to allow traffic only from within your VCN and
your on-premises network.
The following figure illustrates the general layout. To use this scenario, you must have a
network administrator configure the router at your end of the IPSec VPN. You can then launch
an instance in your VCN and communicate with it using its private IP address from your on-
premises network.
You might use this scenario, for example, if you want to extend your private database servers
in your on-premises network into the cloud.
For instructions on using the Console or API to set up a VCN with private subnets and IPSec
VPN, see Scenario B: Private Subnets with a VPN.
o Stateful egress rule for any traffic in the private subnets on TCP port 1521 (for
Oracle databases)
l The default set of DHCP options
Notice that the public subnet would use both the default security list and the public subnet
security list. Likewise, the private subnet would use both the default security list and the
private subnet security list. The default security list contains a core set of stateful rules that
all subnets in the scenario need to use.
The following figure illustrates the general layout. To use this scenario, you must have a
network administrator configure the router at your end of the IPSec VPN.
You might use this scenario to host a cloud-based website that's connected to a database. The
web servers reside in the public subnet and are thus reachable from the internet. The
database servers reside in the private subnet.
For instructions on using the Console or API to set up a VCN with public and private subnets,
see Scenario C: Public and Private Subnets with a VPN.
l 129.213.8.0/21
l 129.213.16.0/20
l 129.213.32.0/19
l 129.213.64.0/18
Frankfurt (FRA) region:
l 130.61.8.0/21
l 130.61.16.0/20
l 130.61.32.0/19
l 130.61.64.0/19
l 129.146.0.0/21
l 129.146.8.0/22
l 129.146.16.0/20
l 129.146.32.0/21
l 129.146.40.0/22
l 129.146.64.0/18
l 129.146.128.0/19
169.254.0.2, 169.254.2.2-169.254.2.254
For iSCSI connections to the boot and block volumes.
169.254.0.3
For uploads relating to kernel updates. See OS Kernel Updates for more information.
169.254.169.254
For DNS (port 53) and Metadata (port 80) services. See Getting Instance Metadata for more
information.
169.254.169.253
For Windows instances to activate with Microsoft Key Management Service (KMS).
Resource Identifiers
Each Oracle Cloud Infrastructure resource has a unique, Oracle-assigned identifier called an
Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify
your resources, see Resource Identifiers.
To access the Console, you must use a supported browser. Oracle Cloud Infrastructure
supports the latest versions of Google Chrome, Microsoft Edge, Internet Explorer 11, Firefox,
and Firefox ESR. Note that private browsing mode is not supported for Firefox, Internet
Explorer, or Edge.
For general information about using the API, see About the API.
An administrator in your organization needs to set up groups, compartments, and policies that
control which users can access which services, which resources, and the type of access. For
example, the policies control who can create new users, create and manage the cloud
network, launch instances, create buckets, download objects, etc. For more information, see
Getting Started with Policies. For specific details about writing policies for each of the
different services, see Policy Reference.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud
Infrastructure resources that your company owns, contact your administrator to set up a user
ID for you. The administrator can confirm which compartment or compartments you should be
using.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don't have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
If you're a member of the Administrators group, you already have the required access to
execute Scenario A. Otherwise, you need access to Networking, and you need the ability to
launch instances. See IAM Policies for Networking.
Setting Up Scenario A
Setup is easy in the Console. Alternatively, you can use the Oracle Cloud Infrastructure API,
which lets you execute the individual operations yourself.
Oracle then automatically creates a VCN for you with CIDR block 10.0.0.0/16, an internet
gateway, a route rule to enable traffic to and from the internet gateway, the Default Security
List, the default set of DHCP options, and one public subnet per availability domain. The VCN
will automatically use the Internet and VCN Resolver for DNS.
Your next step is to launch an instance into one of the subnets and then communicate with it
(for example, via SSH or RDP). For more information, see Launching an Instance.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
1. CreateVcn: Make sure to include a DNS label if you want the VCN to use the built-in DNS
capability (see DNS in Your Virtual Cloud Network).
2. CreateSubnet: To match the scenario described above, create one public subnet per
availability domain. Include a DNS label for each subnet if you want the VCN Resolver to
resolve hostnames for instances in the subnet. Use the default route table, default
security list, and default set of DHCP options.
3. CreateInternetGateway
4. UpdateRouteTable: To enable communication via the internet gateway, update the
default route table to include this route rule: 0.0.0.0/0 > internet gateway.
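The effect of the 0.0.0.0/0 rule can be sketched as a most-specific-match selection. This is a simplification for intuition (real VCN routing also handles intra-VCN traffic implicitly, and the function shape here is an assumption):

```python
import ipaddress

def select_target(dest_ip, route_rules):
    """route_rules maps CIDR strings to targets; the most specific
    matching rule wins, so 0.0.0.0/0 acts as the catch-all."""
    ip = ipaddress.ip_address(dest_ip)
    candidates = [c for c in route_rules if ip in ipaddress.ip_network(c)]
    if not candidates:
        return None
    best = max(candidates, key=lambda c: ipaddress.ip_network(c).prefixlen)
    return route_rules[best]

rules = {"0.0.0.0/0": "internet-gateway", "172.16.0.0/12": "drg"}
```

With these hypothetical rules, traffic to an on-premises address would match the more specific DRG rule, and everything else would fall through to the internet gateway.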
You now have a working cloud network (VCN) with an internet gateway, the Default Security
List, the default set of DHCP options, and at least one public subnet.
Your next step is to launch an instance into a subnet in the VCN and then communicate with it
(for example, via SSH or RDP). For more information, see Launching an Instance.
Prerequisites
To set up the VPN in this scenario, you need to get the following information from a network
administrator:
You provide this information to Oracle and in return receive the information your network
administrator needs to configure the on-premises router at your end of the VPN.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
If you're a member of the Administrators group, you already have the required access to
execute Scenario B. Otherwise, you need access to Networking, and you need the ability to
launch instances. See IAM Policies for Networking.
Setting Up Scenario B
Setup is easy in the Console. Alternatively, you can use the Oracle Cloud Infrastructure API,
which lets you execute the individual operations yourself.
g. For the CIDR Block, enter a single, contiguous CIDR block for the cloud network.
For example: 10.0.0.0/16. You cannot change this value later. For reference,
here's a CIDR calculator.
h. If you want the instances in the VCN to have DNS hostnames (which can be used
with the Internet and VCN Resolver, a built-in DNS capability in the VCN), select
the check box for Use DNS Hostnames in this VCN. Then you may specify a
DNS label for the VCN, or the Console will generate one for you. The dialog box
will automatically display the corresponding DNS Domain Name for the VCN
(<VCN DNS label>.oraclevcn.com). For more information, see DNS in Your
Virtual Cloud Network.
i. Tags: Leave as is. You can add tags later if you want. For more information, see
Resource Tags.
j. Click Create Virtual Cloud Network.
The cloud network is then created and listed on the page.
2. Create the subnets in the cloud network:
a. On the Virtual Cloud Networks page, click the cloud network you just created.
b. Click Subnets.
c. Click Create Subnet.
d. Enter the following:
l Name: A friendly name for the first subnet (for example, Subnet1). It
doesn't have to be unique, and it cannot be changed later in the Console
(but you can change it with the API).
l Availability Domain: Choose one of the availability domains.
l CIDR Block: A single, contiguous CIDR block within the VCN's CIDR block.
For example: 10.0.1.0/24. You cannot change this value later. For
reference, here's a CIDR calculator.
l Route Table: Select the default route table.
You could now launch one or more instances into the subnets (see Launching an Instance).
However, you wouldn't be able to communicate with them because there's no gateway
connecting the cloud network to your on-premises network. The next procedure walks you
through creating a VPN connection to enable that communication.
a. Click Networking, click Virtual Cloud Networks, and then click your cloud
network.
b. Click Route Tables, and then click the default route table.
c. Click Edit Route Rules.
d. Click + Another Route Rule.
e. Enter the following:
l Destination CIDR Block: 0.0.0.0/0 (which means that all non-intra-VCN
traffic that is not already covered by other rules in the route table will go to
the target specified in this rule).
l Target Type: Dynamic Routing Gateway.
l Compartment: Leave as is.
l Target: The dynamic routing gateway you created earlier.
f. Click Save.
The cloud network's default route table now directs outbound traffic to the
dynamic routing gateway and ultimately to your on-premises network.
5. Create an IPSec Connection:
a. Click Networking, and then click Dynamic Routing Gateways.
b. Click the dynamic routing gateway you created earlier.
c. Click Create IPSec Connection.
d. Enter the following:
l Create in Compartment: Leave the default value (the compartment
you're currently working in).
l Name: Enter a friendly name for the IPSec connection. It doesn't have to
be unique, and it cannot be changed later in the Console (but you can
change it with the API).
l Customer-Premises Equipment: Select the customer-premises
equipment object you created earlier.
l Static Route CIDR: The CIDR block for a static route (see Prerequisites).
If you need to add another, click Add Static Route.
e. Click Create IPSec Connection.
The IPSec connection will be in the "Provisioning" state for a short period.
f. Click the Actions icon, and then click Tunnel Information.
The configuration information for each tunnel is displayed (the IP address of the
VPN headend and the shared secret). Also, the tunnel's status is displayed (either
"Available" or "Down").
g. Copy the information for each of the tunnels into an email or other location so you
can deliver it to the network administrator who will configure the on-premises
router.
For more information, see Configuring Your On-Premises Router for an IPSec
VPN. You can view this tunnel information here in the Console at any time.
h. Click Close.
You have now created all the components required for the VPN. But your network
administrator must configure the on-premises router before network traffic can flow between
your on-premises network and cloud network.
1. Make sure you have the tunnel configuration information that Oracle provided during
VPN setup. See To add a VPN to your cloud network.
2. Configure your on-premises router according to the information in Configuring Your On-
Premises Router for an IPSec VPN.
If there are already instances in one of the subnets, you can confirm the IPSec connection is
up and running by pinging the instances from your on-premises network.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
1. CreateVcn: Make sure to include a DNS label if you want the VCN to use the VCN
Resolver (see DNS in Your Virtual Cloud Network).
2. CreateSubnet: Call it twice, once for each subnet in the scenario. Set each subnet to be
private (that is, prohibit public IP addresses on the VNICs in the subnet). Include a DNS
label for each subnet if you want the VCN Resolver to resolve hostnames for VNICs in
the subnet. Use the default route table, default security list, and default set of DHCP
options.
3. CreateDrg: This creates a new dynamic routing gateway (DRG).
4. CreateDrgAttachment: This attaches the DRG to the VCN.
5. CreateCpe: Here you'll provide the IP address of the on-premises router at your end of
the VPN (see Prerequisites).
6. CreateIPSecConnection: Here you'll provide the static routes for your on-premises
network (see Prerequisites). In return, you'll receive the configuration information your
network administrator needs in order to configure your on-premises router. If you need
that information later, you can get it with GetIPSecConnectionDeviceConfig. For more
information about the configuration, see Configuring Your On-Premises Router for an
IPSec VPN.
7. UpdateRouteTable: To enable communication via the VPN, update the default route
table to include this route: 0.0.0.0/0 > DRG you created earlier.
8. First call GetSecurityList to get the default security list, and then call UpdateSecurityList
to add these additional rules to that list (be aware that UpdateSecurityList overwrites
the entire set of rules):
- Stateful ingress: Source CIDR=0.0.0.0/0, protocol=TCP, source port = all, destination port=80 (for HTTP).
9. LaunchInstance: Launch at least one instance in each subnet. For more information, see
Launching an Instance.
Prerequisites
To set up the VPN in this scenario, you need to get certain information from a network administrator: the public IP address of the on-premises router at your end of the VPN, and the static routes for your on-premises network. You will provide Oracle this information and in return receive the information your network administrator needs in order to configure the on-premises router at your end of the VPN.
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
If you're a member of the Administrators group, you already have the required access to
execute Scenario C. Otherwise, you need access to Networking, and you need the ability to
launch instances. See IAM Policies for Networking.
Setting Up Scenario C
Setup is easy in the Console. Alternatively, you can use the Oracle Cloud Infrastructure API,
which lets you execute the individual operations yourself.
- Stateful ingress rule with source CIDR=CIDR for your private subnet #2, protocol=TCP, source port = all, destination port=1521 (for Oracle databases).
e. Add the following egress rules:
- Stateful egress rule with destination CIDR=CIDR for private subnet #1, protocol=TCP, source port = all, destination port=1521 (for Oracle databases).
- Stateful egress rule with destination CIDR=CIDR for private subnet #2, protocol=TCP, source port = all, destination port=1521 (for Oracle databases).
f. Click Create Security List.
The private subnet security list is then created and listed on the page.
7. Create the subnets in the cloud network:
a. Return to the page that shows your cloud network's details, and click Subnets.
b. Click Create Subnet.
c. Enter the following:
- Name: A friendly name for the first subnet (for example, Public Subnet 1). It doesn't have to be unique, and it cannot be changed later in the Console (but you can change it with the API).
- Availability Domain: Choose one of the availability domains.
- CIDR Block: A single, contiguous CIDR block within the VCN's CIDR block. For example: 10.0.1.0/24. You cannot change this value later. For reference, here's a CIDR calculator.
- Route Table: Select the Public Subnet Route Table you created earlier.
- Private or public subnet: Select Public Subnet, which means VNICs in the subnet are allowed to have public IP addresses. For more information, see Internet Access.
The dynamic routing gateway will be in the "Provisioning" state for a short period. Make
sure it is done being provisioned before continuing.
3. Attach the dynamic routing gateway to your cloud network:
a. Click the dynamic routing gateway that you just created.
Its details are displayed. You initially see the tab showing the IPSec connections
associated with the dynamic routing gateway. Instead, you want to view the tab
that shows the cloud network associated with this dynamic routing gateway.
b. Click Virtual Cloud Networks.
c. Click Attach to Virtual Cloud Network.
d. Select the cloud network you want to attach the dynamic routing gateway to and
click Attach to Virtual Cloud Network.
The attachment will be in the "Attaching" state for a short period before it's ready.
4. Update the private subnet's route table:
a. Click Networking, click Virtual Cloud Networks, and then click your cloud
network.
Its details are displayed.
b. Click Route Tables, and then click the Private Subnet Route Table you created
earlier.
Its details are displayed.
c. Click Edit Route Rules.
d. For the existing rule, change the Target Type to Dynamic Routing Gateway, and for the Target, select the dynamic routing gateway you created earlier.
e. Click Save.
The rule is updated to direct the traffic from the private subnet in the cloud
network to the dynamic routing gateway and ultimately to your on-premises
network.
5. Create an IPSec connection, following the same steps described earlier in this topic.
You have now created all the components required for the VPN. But your network
administrator must configure the on-premises router before network traffic can flow between
your on-premises network and cloud network.
1. Make sure you have the tunnel configuration information that Oracle provided during
VPN setup. See To add a VPN to your cloud network.
2. Configure your on-premises router according to the information in Configuring Your On-
Premises Router for an IPSec VPN.
If there are already instances in one of the subnets, you can confirm the IPSec connection is up and running by pinging the instances from your on-premises network.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
1. CreateVcn: Make sure to include a DNS label if you want the VCN to use the VCN
Resolver (see DNS in Your Virtual Cloud Network).
2. CreateInternetGateway
3. CreateRouteTable: Call it twice, once to create the Public Subnet Route Table and once
to create the Private Subnet Route Table. To enable communication via the internet
gateway, include this route: 0.0.0.0/0 > internet gateway.
4. First call GetSecurityList to get the default security list, and then call
UpdateSecurityList:
- Change the existing stateful ingress rules to use your on-premises network's CIDR as the source CIDR, instead of 0.0.0.0/0.
- If you plan to launch Windows instances, add this stateful ingress rule: Source CIDR = your on-premises network on TCP, source port = all, destination port = 3389 (for RDP).
5. CreateSecurityList: Call it to create the Public Subnet Security List with these rules:
- Stateful ingress: Source 0.0.0.0/0 on TCP, source port = all, destination port = 80 (HTTP)
- Stateful ingress: Source 0.0.0.0/0 on TCP, source port = all, destination port = 443 (HTTPS)
- Stateful egress: Destination CIDR blocks of private subnets on TCP, source port = all, destination port = 1521 (for Oracle databases)
6. CreateSecurityList: Call it again to create the Private Subnet Security List with these
rules:
- Stateful ingress: Source CIDR blocks of public subnets on TCP, source port = all, destination port = 1521 (for Oracle databases)
- Stateful ingress: Source CIDR blocks of private subnets on TCP, source port = all, destination port = 1521 (for Oracle databases)
- Stateful egress: Destination CIDR blocks of private subnets on TCP, source port = all, destination port = 1521 (for Oracle databases)
7. CreateSubnet: Call it four times, once each for Public Subnet 1 and Private Subnet 1 in
the first availability domain, and then once each for Public Subnet 2 and Private Subnet
2 in a second availability domain. For the two private subnets, set the flag to prohibit
public IP addresses on the VNICs in the subnet. Include a DNS label for each subnet if
you want the VCN Resolver to resolve hostnames for VNICs in the subnet. For the public
subnets, make sure to specify both the default security list and the Public Subnet
Security List that you created earlier. Likewise, for the private subnets, make sure to
specify both the default security list and the Private Subnet Security List that you
created earlier. Use the default set of DHCP options.
8. CreateDrg: This creates a new dynamic routing gateway (DRG).
9. CreateDrgAttachment: This attaches the DRG to the VCN.
10. CreateCpe: Here you'll provide the IP address of the router at your end of the VPN (see
Prerequisites).
11. CreateIPSecConnection: Here you'll provide the static routes for your on-premises
network (see Prerequisites). In return, you'll receive the configuration information your
network administrator needs in order to configure your router. If you need that
information later, you can get it with GetIPSecConnectionDeviceConfig. For more
information about the configuration, see Configuring Your On-Premises Router for an
IPSec VPN.
12. UpdateRouteTable: To enable communication via the VPN, update the Private Subnet
Route Table to include this route: 0.0.0.0/0 > dynamic routing gateway.
13. LaunchInstance: Launch at least one instance in each subnet. By default, the instances
in the public subnets will be assigned public IP addresses. For more information, see
Launching an Instance.
You can now communicate from your on-premises network with the instances in the public
subnets over the internet gateway.
You can privately connect a VCN to another VCN in the same region so that the traffic does not
traverse the internet. The CIDRs for the two VCNs must not overlap. For more information,
see VCN Peering.
Each subnet in a VCN exists in a single availability domain and consists of a contiguous range
of IP addresses that do not overlap with other subnets in the cloud network. Example:
172.16.1.0/24. The first two IP addresses and the last in the subnet's CIDR are reserved by
the Networking service. You can't change the size of the subnet after creation, so it's
important to think about the size of subnets you need before creating them. Also, the subnet
acts as a unit of configuration: all instances in a given subnet use the same route table,
security lists, and DHCP options.
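To make the reserved-address rule concrete, here is a minimal sketch using Python's standard ipaddress module. It assumes the 172.16.1.0/24 example above and simply lists the first two and last addresses that the Networking service reserves; the variable names are illustrative and nothing here is Oracle SDK code.

```python
import ipaddress

subnet = ipaddress.ip_network("172.16.1.0/24")
hosts = list(subnet)  # all 256 addresses in the /24, in order

# Per the text: the first two IP addresses and the last are reserved.
reserved = [hosts[0], hosts[1], hosts[-1]]
print([str(ip) for ip in reserved])
# ['172.16.1.0', '172.16.1.1', '172.16.1.255']

# Subnets in a VCN must not overlap; ipaddress can check that directly.
other = ipaddress.ip_network("172.16.2.0/24")
print(subnet.overlaps(other))  # False: safe to use both in one VCN
```

Because three addresses per subnet are reserved, a /24 yields 253 usable addresses, which is worth factoring into the sizing decision the text describes.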
Subnets can be either public or private (see Public vs. Private Subnets). You choose this
during subnet creation, and you can't change it later.
You can think of each Compute instance as residing in a subnet. But to be precise, each
instance is actually attached to a virtual network interface card (VNIC), which in turn resides
in the subnet and enables a network connection for that instance.
For the purposes of access control, you must specify the compartment where you want the
cloud network and subnets to reside. Consult an administrator in your organization if you're
not sure which compartment to use. For more information, see Access Control.
You may optionally assign friendly names to the cloud network and its subnets. The names
don't have to be unique, and you can change them later. Oracle will automatically assign each
resource a unique identifier called an Oracle Cloud ID (OCID). For more information, see
Resource Identifiers.
You can also add a DNS label for the VCN and each subnet, which are required if you want the
instances to use the Internet and VCN Resolver feature for DNS in the VCN. For more
information, see DNS in Your Virtual Cloud Network.
You may optionally specify a route table for each subnet. If you don't, the cloud network's
default route table is associated with the subnet. After creating the subnet, you can't change
which route table is associated with it, but you can change the route rules in the table. For
more information about route tables, see Route Tables.
You may optionally specify one or more security lists for the subnet (up to five). If you don't
specify any, the cloud network's default security list is associated with the subnet. After
creating the subnet, you can't change which security lists are associated with it, but you can
change the rules in the lists. Remember that the security list rules are enforced at the
instance level, even though the list is associated at the subnet level. For more information,
see Security Lists.
Similarly, you may also optionally associate a set of DHCP options with the subnet during
creation. All instances in the subnet will receive the configuration specified in that set of DHCP
options. If you don't specify a set, the cloud network's set of default DHCP options is
associated with the subnet. After creating the subnet, you can't change which set of DHCP
options are associated with it, but you can change the values for the options. For more
information, see DHCP Options.
To delete a subnet, it must contain no instances, load balancers, or DB systems. For more
details, see Subnet Deletion.
To delete a cloud network, its subnets must be empty (contain no instances, load balancers,
or DB systems). Also, the cloud network must have no attached gateways. If you're using the
Console, there's a "Delete All" process you can use after first ensuring the subnets are empty.
See To delete a cloud network.
For information about the number of cloud networks and subnets you can have, see Service
Limits.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
The cloud network is then created and displayed on the Virtual Cloud Networks page in the
compartment you chose. Next you'll typically want to create one or more subnets in the cloud
network.
To create a subnet
1. Confirm you're viewing the compartment that contains the cloud network that you want
to add the subnet to. If you've just created the cloud network, you should still be
viewing the same compartment. If you click Networking and then click Virtual Cloud
Networks, you should see the cloud network. For information about compartments and
access control, see Access Control.
2. On the Virtual Cloud Networks page, click the cloud network you're interested in.
3. Click Create Subnet.
4. In the Create Subnet dialog box, you specify the resources to associate with the subnet (for example, a route table and security lists). By default, the subnet will be created in the current compartment, and you'll choose the resources from the same compartment. Click the click here link in the dialog box if you want to enable compartment selection for the subnet and each of those resources.
Enter the following:
- Create in Compartment: If you've enabled compartment selection, specify the compartment where you want to put the subnet.
- Name: A friendly name for the subnet. It doesn't have to be unique, and it cannot be changed later in the Console (but you can change it with the API).
- Availability Domain: The availability domain for the subnet. Any instances you later launch into this subnet will also go into this availability domain.
- CIDR Block: A single, contiguous CIDR block for the subnet (for example, 172.16.0.0/24). Make sure it's within the cloud network's CIDR block and doesn't overlap with any other subnets. You cannot change this value later. See Allowed VCN Size and Address Ranges. For reference, here's a CIDR calculator.
- Route Table: The route table to associate with the subnet. If you've enabled compartment selection, under Route Table Compartment, you must specify the compartment that contains the route table.
- Private or public subnet: Whether VNICs in the subnet can have public IP addresses. For more information, see Internet Access.
- Use DNS Hostnames in this Subnet: This option is available only if you provided a DNS label for the VCN during creation. If you want this subnet's instances to have DNS hostnames (which can be used by the Internet and VCN Resolver for DNS), select the check box for Use DNS Hostnames in this Subnet. Then you may specify a DNS label for the subnet, or the Console will generate one for you. The dialog box will automatically display the corresponding DNS Domain Name for the subnet (<subnet DNS label>.<VCN DNS label>.oraclevcn.com). For more information, see DNS in Your Virtual Cloud Network.
- DHCP Options: The set of DHCP options to associate with the subnet. If you've enabled compartment selection, under DHCP Options Compartment, you must specify the compartment that contains the set of DHCP options.
- Security Lists: One or more security lists to associate with the subnet. If you've enabled compartment selection, you must specify the compartment that contains the security list.
- Tags: Optionally, you can apply tags. If you have permissions to create a resource, you also have permissions to apply free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information about tagging, see Resource Tags. If you are not sure if you should apply tags, skip this option (you can apply tags later) or ask your administrator.
5. Click Create.
The subnet is then created and displayed on the Subnets page in the compartment you
chose.
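As a small illustration of the DNS domain name format described above, this sketch assembles the <subnet DNS label>.<VCN DNS label>.oraclevcn.com form. The label values and the validation pattern are assumptions made for illustration; consult DNS in Your Virtual Cloud Network for the authoritative label rules.

```python
import re

def subnet_domain(subnet_label, vcn_label):
    """Build a subnet's DNS domain name in the documented format."""
    # Assumed constraint, for illustration only: labels are lowercase
    # letters and digits, starting with a letter (hedged; see the docs).
    for label in (subnet_label, vcn_label):
        if not re.fullmatch(r"[a-z][a-z0-9]*", label):
            raise ValueError(f"invalid DNS label: {label!r}")
    return f"{subnet_label}.{vcn_label}.oraclevcn.com"

print(subnet_domain("subnet1", "mycompany"))
# subnet1.mycompany.oraclevcn.com
```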
To delete a subnet
Prerequisite: The subnet must have no instances, load balancers, or DB systems in it. For
more information, see Subnet Deletion.
1. In the Console, click Networking, and then click Virtual Cloud Networks.
A list of the cloud networks in the compartment you're viewing is displayed. If you don’t
see the one you're looking for, make sure you’re viewing the correct compartment
(select from the list on the left side of the page).
2. Click Subnets.
3. For the subnet you want to delete, click Terminate.
4. Confirm when prompted.
If the subnet is empty, its state changes to TERMINATING briefly and then TERMINATED. If
the subnet is not empty, you get an error indicating that there are still instances or load
balancers that you must delete first.
Before using the "Delete All" process, make sure there are
no instances, load balancers, or DB systems in any of the
subnets. For more information, see Subnet Deletion.
1. In the Console, click Networking, and then click Virtual Cloud Networks.
A list of the cloud networks in the compartment you're viewing is displayed. If you don’t
see the one you're looking for, make sure you’re viewing the correct compartment
(select from the list on the left side of the page).
2. For the cloud network you want to delete, click the Actions icon, and then click Terminate.
The resulting dialog box displays a list of the resources to be deleted. The list doesn't
include the default components that come with the VCN, but they will be deleted along
with the VCN.
3. Click Delete All.
The process begins. The progress is displayed as each resource is deleted.
4. When the process is complete, click Close.
3. For the subnet you're interested in, click the Actions icon, and then click View Tags. From there you can view the existing tags, edit them, and apply new ones.
- ListVcns
- CreateVcn
- GetVcn
- UpdateVcn
- DeleteVcn: Deletes only the VCN and not its related resources. Note that the Console offers a "Delete All" process that makes it easy to delete the VCN and its related resources. See To delete a cloud network.
- ListSubnets
- CreateSubnet
- GetSubnet
- UpdateSubnet: You can update only the subnet's name and tags.
- DeleteSubnet
- Public vs. private subnets: You can designate a subnet to be private, which means instances in the subnet cannot have public IP addresses. For more information, see Public vs. Private Subnets.
- Security lists: To control packet-level traffic in/out of an instance. You configure security lists in the Oracle Cloud Infrastructure API or Console. For more information about security lists, see Security Lists.
- Firewall rules: To control packet-level traffic in/out of an instance. You configure firewall rules directly on the instance itself. Notice that Oracle Cloud Infrastructure images that run Oracle Linux automatically include default rules that allow ingress on TCP port 22 for SSH traffic. Also, the Windows images include default rules that allow ingress on TCP port 3389 for Remote Desktop access. For more information, see Oracle-Provided Images.
- Gateways and route tables: To control general traffic flow from your cloud network to outside destinations (the internet, your on-premises network, or another VCN). You configure your cloud network's gateways and route tables in the Oracle Cloud Infrastructure API or Console. For more information about the gateways, see Networking Components. For more information about route tables, see Route Tables.
- IAM policies: To control who has access to the Oracle Cloud Infrastructure API or Console itself. You can control the type of access, and which cloud resources can be accessed. For example, you can control who can set up your network and subnets, or who can update route tables or security lists. You configure policies in the Oracle Cloud Infrastructure API or Console. For more information, see Access Control.
Access Control
This topic gives basic information about using compartments and IAM policies to control
access to your cloud network.
Anytime you create a cloud resource such as a virtual cloud network (VCN) or compute
instance, you must specify which IAM compartment you want the resource in. A compartment
is a collection of related resources that can be accessed only by certain groups that have been
given permission by an administrator in your organization. The administrator will create
compartments and corresponding IAM policies to control which users in your organization
have access to which compartments. Ultimately, the goal is to ensure that each person has
access to only the resources they need.
If your company is starting to try out Oracle Cloud Infrastructure, only a few people need to
create and manage the VCN and its components, launch instances into the VCN, and attach
block storage volumes to those instances. Those few people need access to all these
resources, so all those resources could be in the same compartment.
In an enterprise production environment with a VCN, your company will want to use multiple
compartments to make it easier to control access to certain types of resources. For example,
your administrator could create Compartment_A for your VCN and other networking
components. The administrator could then create Compartment_B for all the compute
instances and block storage volumes that the HR organization uses, and Compartment_C for
all the instances and block storage volumes that the Marketing organization uses. The
administrator would then create IAM policies that give users only the level of access they
need in each compartment. For example, the HR instance administrator is not entitled to
modify the existing cloud network. So they would have full permissions for Compartment_B,
but limited access to Compartment_A (just what's required to launch instances into the
network). If they tried to modify other resources in Compartment_A, the request would be
denied.
For more information about compartments and how to control access to your cloud resources,
see "Setting Up Your Tenancy" in the Oracle Cloud Infrastructure Getting Started Guide and
Overview of IAM.
The simplest approach to granting access to Networking is the policy listed in Let Network
Admins Manage a Cloud Network. It covers the cloud network and all the other Networking
components (subnets, security lists, route tables, gateways, and so on). To also give network
admins the ability to launch instances (to test network connectivity), see Let Users Launch
Instances.
If you're new to policies, see Getting Started with Policies and Common Policies.
For reference material for writing more detailed policies for Networking, see Details for the
Core Services.
If you'd like, you can write policies that focus on individual resource-types (for example, just
security lists) instead of the broader virtual-network-family. Be aware that the instance-
family resource-type also includes several permissions for VNICs, which reside in a subnet
but attach to an instance. For more information, see For instance-family Resource Types and
Virtual Network Interface Cards (VNICs).
If you'd like, you can write policies that limit the level of access by using a different policy
verb ( manage vs. use, and so on). If you do, there are some nuances to understand about the
policy verbs for Networking.
Be aware that the inspect verb not only returns general information about the cloud
network's components (for example, the name and OCID of a security list, or of a route
table). It also includes the contents of the component (for example, the actual rules in the
security list, the routes in the route table, and so on).
Also, the following types of abilities are available only with the manage verb, not the use verb:
Security Lists
In addition to using IAM policies to control who can manipulate your VCN (for example, add an
internet gateway, change route table rules), you can use security lists to control traffic at the
packet level.
A security list provides a virtual firewall for an instance, with ingress and egress rules that
specify the types of traffic allowed in and out. Each security list is enforced at the instance
level. However, you configure your security lists at the subnet level, which means that all
instances in a given subnet are subject to the same set of rules. The security lists apply to a
given instance whether it's talking with another instance in the VCN or a host outside the VCN.
Each subnet can have multiple security lists associated with it (for the maximum number, see
Service Limits). A packet in question is allowed if any rule in any of the lists allows the traffic
(or if the traffic is part of an existing connection being tracked). There's a caveat if the lists
happen to contain both stateful and stateless rules that cover the same traffic. For more
information, see Stateful vs. Stateless Rules.
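The "allowed if any rule in any of the lists allows it" behavior can be sketched as a simple any-of-any check. This is a toy model with invented rule fields; it deliberately ignores connection tracking and the stateful/stateless caveat mentioned above.

```python
import ipaddress

# Toy model: each security list is a list of ingress rules, and a packet
# is admitted if ANY rule in ANY associated list matches it.
def rule_matches(rule, src_ip, protocol, dest_port):
    return (
        ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["source"])
        and rule["protocol"] in (protocol, "all")
        and (rule["dest_port"] is None or rule["dest_port"] == dest_port)
    )

def ingress_allowed(security_lists, src_ip, protocol, dest_port):
    return any(
        rule_matches(rule, src_ip, protocol, dest_port)
        for rules in security_lists
        for rule in rules
    )

default_list = [{"source": "0.0.0.0/0", "protocol": "tcp", "dest_port": 22}]
web_list = [{"source": "0.0.0.0/0", "protocol": "tcp", "dest_port": 80}]

# Port 80 is denied by the default list alone but allowed once the
# subnet also has the web list associated with it.
print(ingress_allowed([default_list], "203.0.113.7", "tcp", 80))            # False
print(ingress_allowed([default_list, web_list], "203.0.113.7", "tcp", 80))  # True
```

The point of the sketch is the union semantics: associating an additional list with a subnet can only allow more traffic, never less.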
Security lists are regional entities. For the limit on the number of security lists you can have
in a VCN, see Service Limits.
Note: Security lists are not enforced for traffic involving the
169.254.0.0/16 CIDR block, which includes services such as
iSCSI and instance metadata.
When you create a security list rule, you choose whether it's stateful or stateless. The
difference is described below. The default is stateful. Stateless rules are recommended if you
have a high-volume internet-facing website (for the HTTP/HTTPS traffic).
Stateful: If you add a stateful rule to a security list, that indicates that you want to use
connection tracking for any traffic that matches that rule (for instances in the subnet the
security list is associated with). This means that when an instance receives traffic matching
the stateful ingress rule, the response is tracked and automatically allowed back to the
originating host, regardless of any egress rules applicable to the instance. And when an
instance sends traffic that matches a stateful egress rule, the incoming response is
automatically allowed, regardless of any ingress rules. For more details, see Connection
Tracking Details for Stateful Rules.
The figure below illustrates a stateful ingress rule for an instance that needs to receive and
respond to HTTP traffic. Instance A and Host B are communicating (Host B could be any host,
whether an instance or not). The stateful ingress rule allows traffic from any source IP
address (0.0.0.0/0) to destination port 80 only (TCP protocol). No egress rule is required to
allow the response traffic.
Stateless: If you add a stateless rule to a security list, that indicates that you do NOT want to
use connection tracking for any traffic that matches that rule (for instances in the subnet the
security list is associated with). This means that response traffic is not automatically allowed.
To allow the response traffic for a stateless ingress rule, you must create a corresponding
stateless egress rule.
The next figure shows Instance A and Host B as before, but now with stateless security list
rules. As with the stateful rule above, the stateless ingress rule allows traffic from all IP
addresses and any ports, on destination port 80 only (using the TCP protocol). To allow the
response traffic, there needs to be a corresponding stateless egress rule that allows traffic to
any destination IP address (0.0.0.0/0) and any ports, from source port 80 only (using the TCP
protocol).
If Instance A instead needs to initiate HTTP traffic and get the response, a different set of stateless rules is required. As the next figure shows, the egress rule would have source port = all and destination port = 80 (HTTP). The ingress rule would then have source port = 80 and destination port = all.
If you were to use port binding on Instance A to specify exactly which port the HTTP traffic
would come from, you could specify that as the source port in the egress rule and the
destination port in the ingress rule.
Each cloud network has a default security list. A given subnet automatically has the default
security list associated with it if you don't specify one or more other security lists during
subnet creation. After you create a subnet, you can't change which security lists are
associated with it. However, you can change the rules in the lists.
Unlike other security lists, the default security list comes with an initial set of stateful rules,
which you can change:
- Stateful ingress: Allow TCP traffic on destination port 22 (SSH) from source 0.0.0.0/0 and any source port. This rule makes it easy for you to create a new cloud network and public subnet, launch a Linux instance, and then immediately connect via SSH to that instance without needing to write any security list rules yourself.
- Stateful ingress: Allow ICMP traffic type 3 code 4 from source 0.0.0.0/0 and any source port. This rule enables your instances to receive Path MTU Discovery fragmentation messages.
- Stateful ingress: Allow ICMP traffic type 3 (all codes) from source = your VCN's CIDR and any source port. This rule makes it easy for your instances to receive connectivity error messages from other instances within the VCN.
- Stateful egress: Allow all traffic. This allows instances to initiate traffic of any kind to any destination. Notice that this means the instances can talk to any internet IP address if the cloud network has an internet gateway. And because stateful security rules use connection tracking, the response traffic is automatically allowed regardless of any ingress rules. For more information, see Connection Tracking Details for Stateful Rules.
The default security list comes with no stateless rules. However, you can add or remove rules
from the default security list as you like.
Each security list can contain one or more rules, and each rule allows either ingress or
egress traffic. You choose whether the rule is stateful or stateless. For examples of rules, see
Stateful vs. Stateless Rules, and see Typical Networking Scenarios. For the limit on the
number of rules you can have in a security list, see Service Limits.
Each ingress rule consists of these parts:

- Stateful or stateless: If stateful, connection tracking is used for traffic matching the rule. If stateless, no connection tracking is used. See Stateful vs. Stateless Rules.
- Protocol: Either a single IPv4 protocol or "all" to cover all protocols.
- Source CIDR: The CIDR block that the traffic originates from. Use 0.0.0.0/0 to indicate all IP addresses. The prefix is required (for example, include the /32 if specifying an individual IP address).
- Source port: The port that the traffic originates from. For TCP or UDP, you can allow all source ports, or optionally specify a single source port number or a range.
- Destination port: The port that the traffic is destined for. For TCP or UDP, you can allow all destination ports, or optionally specify a single destination port number or a range.
- ICMP type and code: For ICMP, you can allow all types and codes, or optionally specify a single type with an optional code. If the type has multiple codes, create a separate rule for each code you want to allow.
Each egress rule consists of these parts:

- Stateful or stateless: Whether connection tracking is used for the traffic matching the rule. See Stateful vs. Stateless Rules.
- Protocol: Either a single IPv4 protocol or "all" to cover all protocols.
- Destination CIDR: The CIDR block that the traffic is destined for. Use 0.0.0.0/0 to indicate all IP addresses. The prefix is required (for example, include the /32 if specifying an individual IP address).
- Source port: The port that the traffic originates from. For TCP or UDP, you can allow all source ports, or optionally specify a single source port number or a range.
- Destination port: The port that the traffic is destined for. For TCP or UDP, you can allow all destination ports, or optionally specify a single destination port number or a range.
- ICMP type and code: For ICMP, you can allow all types and codes, or optionally specify a single type with an optional code. If the type has multiple codes, create a separate rule for each code you want to allow.
For instructions on working with security lists and rules, see the sections that follow.
Oracle uses connection tracking to allow responses for traffic that matches stateful rules (see
Stateful vs. Stateless Rules).
To determine response traffic for TCP, UDP, and ICMP, Oracle performs connection tracking
on these items for the packet:
- Protocol
- Source and destination IP addresses
- Source and destination ports (for TCP and UDP only)
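The lookup can be sketched as follows. This is a hypothetical illustration of the mechanism, not Oracle's implementation: a flow is tracked by the tuple above, and a response is recognized because its tuple is the forward tuple with source and destination swapped.

```python
# Sketch of connection tracking for stateful rules (illustrative only).
# For TCP/UDP the tracked items are the protocol, the IP pair, and the
# port pair; a response reverses the source and destination.

def flow_key(proto, src_ip, src_port, dst_ip, dst_port):
    return (proto, src_ip, src_port, dst_ip, dst_port)

def response_key(key):
    """The key an inbound packet would need to reverse for it to count
    as response traffic for a tracked flow."""
    proto, src_ip, src_port, dst_ip, dst_port = key
    return (proto, dst_ip, dst_port, src_ip, src_port)

tracked = set()

# Outbound request from an instance, allowed by a stateful egress rule:
fwd = flow_key("tcp", "10.0.0.5", 49152, "203.0.113.9", 443)
tracked.add(fwd)

# Inbound packet: allowed automatically because it reverses a tracked flow,
# regardless of any ingress rules.
inbound = flow_key("tcp", "203.0.113.9", 443, "10.0.0.5", 49152)
assert response_key(inbound) in tracked
```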
To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy
written by an administrator, whether you're using the Console or the REST API with an SDK,
CLI, or other tool. If you try to perform an action and get a message that you don’t have
permission or are unauthorized, confirm with your administrator the type of access you've
been granted and which compartment you should work in.
For administrators: The policy in Let Network Admins Manage a Cloud Network covers
management of all Networking components, including security lists.
If you have security admins who need to manage security lists but not other components in
Networking, you could write a more restrictive policy:
Allow group SecListAdmins to manage security-lists in tenancy
Allow group SecListAdmins to manage vcns in tenancy
Both statements are needed because the creation of a security list affects the VCN the
security list is in. The second statement also allows the SecListAdmins group to create new
VCNs, but not create subnets or manage any other components related to any of those VCNs,
except for the security lists. The group also can't delete any existing VCNs that already have
subnets in them.
When you create a new subnet, you must associate at least one security list with it. It can be
either the cloud network's default security list or one or more other security lists that you've
created (for the maximum number, see Service Limits). After creating the subnet, you can't
change which security lists are associated with it, so make sure to create your desired
security list before creating the subnet. However, remember that you can change a security
list's rules at any time.
You may optionally assign a friendly name to the security list during creation. It doesn't have
to be unique, and you can change it later. Oracle will automatically assign the security list a
unique identifier called an Oracle Cloud ID (OCID). For more information, see Resource
Identifiers.
For the purposes of access control, you must specify the compartment where you want the
security list to reside. Consult an administrator in your organization if you're not sure which
compartment to use. For more information, see Access Control.
You can add and remove rules from the security list, but notice that when you update a
security list in the API, the new set of rules replaces the entire existing set of rules.
To delete a security list, it must not be associated with any subnet. You can't delete a cloud
network's default security list.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
1. In the Console, click Networking, and then click Virtual Cloud Networks.
A list of the cloud networks in the compartment you're viewing is displayed. If you don’t
see the one you're looking for, make sure you’re viewing the correct compartment
(select from the list on the left side of the page).
2. Click the cloud network you're interested in.
a. Choose whether it's a stateful or stateless rule (see Stateful vs. Stateless Rules).
By default, rules are stateful unless you specify otherwise.
b. Enter either the source CIDR (for ingress) or destination CIDR (for egress). For
example, use 0.0.0.0/0 to indicate all IP addresses. Other typical CIDRs you
might specify in a rule are the CIDR block for your on-premises network, or for a
particular subnet.
c. Select the protocol (for example, TCP, UDP, ICMP, "All protocols", and so on).
d. Enter further details depending on the protocol:
l If you chose TCP or UDP, enter a source port range and destination port
range. You can enter "All" to cover all ports. If you want to allow a specific
port, enter the port number (for example, 22 for SSH or 3389 for RDP) or a
port range (for example, 20-22).
l If you chose ICMP, you can enter "All" to cover all types and codes. If you
want to allow a specific ICMP type, enter the type and an optional code
separated by a comma (for example, 3,4). If the type has multiple codes
you want to allow, create a separate rule for each code.
7. Click + Add Rule to add additional rules. Make sure to delete any partially completed
rules by clicking the X next to the rule.
8. Tags: Optionally, you can apply tags. If you have permissions to create a resource, you
also have permissions to apply free-form tags to that resource. To apply a defined tag,
you must have permissions to use the tag namespace. For more information about
tagging, see Resource Tags. If you are not sure if you should apply tags, skip this option
(you can apply tags later) or ask your administrator.
9. When you're done, click Create Security List.
The security list is created and then displayed on the Security Lists page in the compartment
you chose. You can now specify this security list when creating a new subnet. Notice that any
stateless rules in the list are shown above any stateful rules. Stateless rules in the list take
precedence over stateful rules. In other words: If there's traffic that matches both a stateless
rule and a stateful rule across all the security lists associated with the subnet, the stateless
rule takes precedence and the connection is not tracked.
For information about using the API and signing requests, see About the API and Security
Credentials. For information about SDKs, see SDKs and Other Tools.
- ListSecurityLists
- GetSecurityList
- UpdateSecurityList
- CreateSecurityList
- DeleteSecurityList
About VNICs
A VNIC enables an instance to connect to a VCN and determines how the instance connects
with endpoints inside and outside the VCN. Each VNIC resides in a subnet in a VCN and
includes these items:
- One primary private IPv4 address from the subnet the VNIC is in, chosen by either you or Oracle.
- Up to 31 optional secondary private IPv4 addresses from the same subnet the VNIC is in, chosen by either you or Oracle.
- An optional public IPv4 address for each private IP, chosen by Oracle but assigned by you at your discretion.
- An optional hostname for DNS for each private IP address (see DNS in Your Virtual Cloud Network).
- A MAC address.
- A VLAN tag assigned by Oracle and available when attachment of the VNIC to the instance is complete (relevant only for bare metal instances).
- A flag to enable or disable the source/destination check on the VNIC's network traffic (see Source/Destination Check).
Each VNIC also has a friendly name you can assign, and an Oracle-assigned OCID (see
Resource Identifiers).
Each instance has a primary VNIC that is automatically created and attached during launch.
That VNIC resides in the subnet you specify during launch. The primary VNIC cannot be
removed from the instance.
The OS on a bare metal instance recognizes two physical network devices and configures
them as two physical NICs, 0 and 1. Whether they're both active depends on the underlying
hardware:
NIC 0 is automatically configured with the primary VNIC's IP configuration (the IP addresses,
DNS hostname, and so on).
If you add a secondary VNIC to a second-generation instance, you must specify which
physical NIC the secondary VNIC should use. You must also configure the OS so that the
physical NIC has the secondary VNIC's IP configuration. See Configuration Script for Either
Linux VM Instances or Linux Bare Metal Instances.
You can add secondary VNICs to an instance after it's launched. The secondary VNIC can be in
a subnet in the same VCN as the primary VNIC or a different VCN. However, all the VNICs
must be in subnets in the same availability domain as the instance.
Here are a few reasons why you might use secondary VNICs:
- Use your own hypervisor on a bare metal instance: The virtual machines on the bare metal instance each have their own secondary VNIC, giving direct connectivity to other instances and services in the VNIC's VCN. For more information, see Installing and Configuring KVM on Bare Metal Instances with Multi-VNIC.
- Connect an instance to multiple subnets in a VCN: For example, you might have a network appliance to monitor traffic between subnets, so the instance needs to connect to multiple subnets in the VCN.
- Connect an instance to multiple VCNs: For example, you might have resources that need to be shared between two teams that each have their own VCN.
The following limits and considerations apply to secondary VNICs:

- They're supported only for Linux instances (both VM and bare metal). Also see Linux: Configuring the OS for Secondary VNICs.
- There's a limit to how many VNICs can be attached to an instance, and it varies by shape. For those limits, see Compute Shapes.
- They can be added only after the instance is launched.
- They must always be attached to an instance and cannot be moved. The process of creating a secondary VNIC automatically attaches it to the instance. The process of detaching a secondary VNIC automatically deletes it.
- They are automatically detached and deleted when you terminate the instance.
- The instance's bandwidth is fixed regardless of the number of VNICs attached. You can't specify a bandwidth limit for a particular VNIC on an instance.
- Attaching multiple VNICs from the same subnet CIDR block to an instance can introduce asymmetric routing, especially on instances using a variant of Linux. If you need this type of configuration, Oracle recommends assigning multiple private IP addresses to one VNIC, or using policy-based routing as shown in the script later in this topic.
Source/Destination Check
By default, every VNIC performs the source/destination check on its network traffic. The VNIC
looks at the source and destination listed in the header of each network packet. If the VNIC is
not the source or destination, then the packet is dropped.
If the VNIC needs to forward traffic (for example, if it needs to perform Network Address
Translation (NAT)), you must disable the source/destination check on the VNIC. For
instructions, see To update an existing VNIC. For information about the general scenario, see
Using a Private IP as a Route Target.
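Conceptually, the check behaves like the sketch below. This is illustrative only: the real check is enforced by the virtual network, not by software you run on the instance.

```python
# Sketch of the source/destination check (conceptual model only).

def passes_check(vnic_ips, packet, check_enabled=True):
    """Allow the packet only if the VNIC is its source or destination.
    Disabling the check models a forwarding/NAT VNIC."""
    if not check_enabled:
        return True  # check disabled, e.g. for a VNIC that performs NAT
    return packet["src"] in vnic_ips or packet["dst"] in vnic_ips

vnic_ips = {"10.0.1.10"}
own_traffic = {"src": "10.0.1.10", "dst": "10.0.2.20"}
forwarded = {"src": "10.0.3.30", "dst": "10.0.2.20"}  # neither IP is the VNIC's

assert passes_check(vnic_ips, own_traffic)
assert not passes_check(vnic_ips, forwarded)                   # dropped by default
assert passes_check(vnic_ips, forwarded, check_enabled=False)  # NAT use case
```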
The instance metadata includes information about the VNICs at this URL:
http://169.254.169.254/opc/v1/vnics/
"virtualRouterIp" : "10.0.4.1",
"subnetCidrBlock" : "10.0.4.0/24",
"nicIndex" : 0
} ]
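The metadata can be fetched and parsed with a short script. This is a sketch: it assumes only the fields visible in the fragment above (virtualRouterIp, subnetCidrBlock, nicIndex), and the fetch itself works only from inside an instance, where the link-local metadata service is reachable.

```python
import json
from urllib.request import urlopen

VNICS_URL = "http://169.254.169.254/opc/v1/vnics/"

def fetch_vnics(timeout=5):
    """Fetch the VNIC metadata. Only reachable from inside an instance."""
    with urlopen(VNICS_URL, timeout=timeout) as resp:
        return json.loads(resp.read())

def route_info(vnics):
    """Pull out the fields shown in the metadata fragment for each VNIC."""
    return [(v["nicIndex"], v["subnetCidrBlock"], v["virtualRouterIp"])
            for v in vnics]

# A sample entry shaped like the fragment above, so the parsing logic
# can be exercised anywhere:
sample = [{"virtualRouterIp": "10.0.4.1",
           "subnetCidrBlock": "10.0.4.0/24",
           "nicIndex": 0}]
assert route_info(sample) == [(0, "10.0.4.0/24", "10.0.4.1")]
```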
1. Confirm you're viewing the compartment that contains the instance you're interested in.
2. Click Compute, and then click Instances.
3. Click the instance to view its details.
4. Click Attached VNICs.
The primary VNIC and any secondary VNICs attached to the instance are displayed.
5. For the VNIC you want to edit, click the Actions icon ( ), and then click Edit VNIC.
6. Make your changes and click Update VNIC.
1. Confirm you're viewing the compartment that contains the instance you're interested in.
2. Click Compute, and then click Instances.
3. Click the instance to view its details.
4. Click Attached VNICs.
The primary VNIC and any secondary VNICs attached to the instance are displayed.
5. For the VNIC you want to delete, click the Actions icon ( ), and then click Delete.
6. Confirm when prompted.
If you then run the provided script in Linux: Configuring the OS for Secondary VNICs, it
removes the secondary VNIC from the OS configuration.
At the end is a script that you can use to configure secondary VNICs on either VM instances or
bare metal instances.
Linux VM Instances
When you add a secondary VNIC to a Linux VM instance, a new interface (that is, an Ethernet
device) is added to the instance and automatically recognized by the OS. However, DHCP is
not active for the secondary VNIC, and you must configure the interface with the static IP
address and default route. The script provided here handles that configuration for you.
When you add a secondary VNIC to a Linux bare metal instance, the OS does not
automatically recognize the secondary VNIC, so you must configure it in the OS. Depending on
your requirements, you can configure it as either:
- An SR-IOV virtual function (see Installing and Configuring KVM on Bare Metal Instances with Multi-VNIC).
- A VLAN-tagged interface (see the script in the following section).
Configuration Script for Either Linux VM Instances or Linux Bare Metal Instances
The following script works for both VM instances and bare metal instances. It looks at the
secondary VNIC information in the instance metadata and configures the OS accordingly. You
could run the script periodically to bring the OS configuration up to date with the instance
metadata.
For bare metal instances in particular, the script creates the interface for the secondary VNIC
and configures it with the relevant information. If the instance has two active physical NICs
(an X7/second-generation shape with NIC 0 and NIC 1), the script configures the secondary
VNIC to use whichever physical NIC you chose when you added the VNIC to the instance. Note
that for NIC 1, if a secondary VNIC has VLAN tag 0, it uses the NIC's interface. The script
doesn't create an interface for that secondary VNIC.
Here are some additional notes about how the script works for both VM instances and bare
metal instances:
- Default namespace and policy-based routing: By default, this script configures the secondary VNIC in the default namespace and with policy-based routing so applications can communicate through the VNIC with hosts outside the VNIC's subnet. This policy-based routing takes effect only if the packets are sourced from the IP address of the secondary VNIC. The ability to bind to a specific source IP address or source interface exists in most tools (such as ssh, ping, and wget) and libraries that initiate connections. For example, the ssh -b option lets you bind to the private IP address of the secondary VNIC:
ssh -b <secondary_VNIC_IP_address> <destination_IP_address>
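The same bind-to-source-address technique is available in most network libraries. For example, in Python's standard socket module you bind the socket to the secondary VNIC's private IP before sending. In this sketch, 127.0.0.1 stands in for the VNIC's private IP so the example runs anywhere.

```python
import socket

def socket_bound_to(source_ip):
    """Create a UDP socket whose traffic is sourced from source_ip,
    analogous to the ssh -b option described above."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((source_ip, 0))  # port 0: let the OS pick an ephemeral port
    return s

# On an instance you would pass the secondary VNIC's private IP here;
# 127.0.0.1 is used so the sketch is runnable anywhere.
s = socket_bound_to("127.0.0.1")
assert s.getsockname()[0] == "127.0.0.1"
s.close()
```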
Private IP Addresses
This topic describes how to manage the IP addresses assigned to an instance in a virtual cloud
network (VCN).
Overview of IP Addresses
Instances use IP addresses for communication. Each instance has at least one private IP
address and optionally one or more public IP addresses. A private IP address enables the instance
to communicate with other instances inside the VCN, or with hosts in your on-premises
network (via an IPSec VPN or Oracle Cloud Infrastructure FastConnect). A public IP address
enables the instance to communicate with hosts on the internet. For more information, see
these related topics:
The Networking service defines an object called a private IP, which consists of:
Each private IP object has an Oracle-assigned OCID (see Resource Identifiers). If you're using
the API, you can also assign each private IP object a friendly name.
Each instance receives a primary private IP object during launch. The primary private IP
cannot be removed from the instance. It's automatically terminated when the instance is
terminated.
If an instance has any secondary VNICs attached, each of those VNICs also has a primary
private IP.
A private IP can be the target of a route rule in your VCN. For more information, see Using a
Private IP as a Route Target.
You can add a secondary private IP to an instance after it's launched. You can add it to either
the primary VNIC or a secondary VNIC on the instance. The secondary private IP address
must come from the CIDR of the VNIC's subnet. You can move a secondary private IP from a
VNIC on one instance to a VNIC on another instance if both VNICs belong to the same subnet.
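The subnet-membership requirement can be checked with a standard library call, for example Python's ipaddress module (the addresses shown are examples, not defaults):

```python
# Check that a candidate secondary private IP comes from the CIDR of the
# VNIC's subnet, using only the standard library.
import ipaddress

def in_subnet(ip, subnet_cidr):
    return ipaddress.ip_address(ip) in ipaddress.ip_network(subnet_cidr)

assert in_subnet("10.0.4.25", "10.0.4.0/24")      # valid secondary IP
assert not in_subnet("10.0.5.25", "10.0.4.0/24")  # wrong subnet: rejected
```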
Here are a few reasons why you might use secondary private IPs:
services in the VCN. Another example: you could run multiple SSL websites with each
one using its own IP address.
- They're supported for all shapes and OS types, for both bare metal and VM instances.
- A VNIC can have a maximum of 31 secondary private IPs.
- They can be assigned only after the instance is launched (or the secondary VNIC is created/attached).
- Unassigning a secondary IP from a VNIC returns the address to the pool of available addresses in the subnet.
- They are automatically unassigned when you terminate the instance (or detach/delete the secondary VNIC).
- The instance's bandwidth is fixed regardless of the number of private IP addresses attached. You can't specify a bandwidth limit for a particular IP address on an instance.
- A secondary private IP can have a reserved public IP assigned to it at your discretion.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
The primary private IP and any secondary private IPs assigned to the VNIC are displayed.
- Tags: Optionally, you can apply tags. If you have permissions to create a resource, you also have permissions to apply free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information about tagging, see Resource Tags. If you are not sure if you should apply tags, skip this option (you can apply tags later) or ask your administrator.
7. Click Assign.
The secondary private IP is created and then displayed on the IP Addresses page for
the VNIC.
8. Configure the IP address:
- For instances running a variant of Linux, see Linux: Details about Secondary IP Addresses.
- For Windows instances, see Windows: Details about Secondary IP Addresses.
- Hostname: Optional. The hostname to be used for DNS within the cloud network. Available only if the VCN and subnet both have DNS labels. See DNS in Your Virtual Cloud Network.
- Tags: Optionally, you can apply tags. If you have permissions to create a resource, you also have permissions to apply free-form tags to that resource. To apply a defined tag, you must have permissions to use the tag namespace. For more information about tagging, see Resource Tags. If you are not sure if you should apply tags, skip this option (you can apply tags later) or ask your administrator.
7. Click Assign.
The private IP address is moved from the original VNIC to the new VNIC.
Prerequisite: Oracle recommends removing the IP address from the OS configuration before
deleting it from the Networking service. See Linux: Details about Secondary IP Addresses or
Windows: Details about Secondary IP Addresses.
1. Confirm you're viewing the compartment that contains the instance you're interested in.
2. Click Compute, and then click Instances.
3. Click the instance to view its details.
4. Click Attached VNICs, and then click the VNIC you're interested in.
5. For the IP address you want to unassign, click the Actions icon ( ), and then click
Unassign.
6. Confirm when prompted.
The private IP address is returned to the pool of available addresses in the subnet.
- GetPrivateIp: Use this to get a single privateIp object by specifying its OCID.
- ListPrivateIps: Use this to get a single privateIp object by specifying the private IP address (for example, 10.0.3.3) and the subnet's OCID. Or you can list all the privateIp objects in a given subnet, or just the ones assigned to a given VNIC.
- CreatePrivateIp: Use this to assign a new secondary private IP to a VNIC.
- UpdatePrivateIp: Use this to reassign a secondary private IP to a different VNIC in the same subnet, or to update the hostname or display name of a secondary private IP.
- DeletePrivateIp: Use this to remove a secondary private IP from a VNIC. The private IP address is returned to the subnet's pool of available addresses.
On the instance, run the following command. It works on all variants of Linux, for both bare
metal and VM instances:
ip addr add <address>/<subnet_prefix_len> dev <phys_dev> label <phys_dev>:<addr_seq_num>
where:

- <address> is the secondary private IP address to add.
- <subnet_prefix_len> is the prefix length of the VNIC's subnet CIDR (for example, 24).
- <phys_dev> is the name of the network interface (for example, ens2f0).
- <addr_seq_num> is an arbitrary sequence number (for example, 0) used in the address's label on the device.
For example:
ip addr add 192.168.20.50/24 dev ens2f0 label ens2f0:0
To later delete the address from the OS configuration, use the corresponding ip addr del command with the same address and device, for example:
ip addr del 192.168.20.50/24 dev ens2f0
Also make sure to unassign the secondary IP from the VNIC. You can do that before or after deleting the address from the OS configuration.
You can make the configuration persistent through a reboot by adding the information to a
configuration file.
For Ubuntu
Add the following information to the end of the /etc/network/interfaces file:
auto <phys_dev>:<addr_seq_num>
iface <phys_dev>:<addr_seq_num> inet static
address <address>
netmask <address_netmask>
Here the netmask is not the prefix length but the dotted-decimal form of the mask (for example, 255.255.255.0 for a /24).
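If you need to derive the dotted-decimal netmask from the subnet's prefix length, a standard library one-liner does it. This is a convenience sketch using Python's ipaddress module:

```python
# Convert a prefix length to the dotted-decimal netmask that the
# /etc/network/interfaces file expects.
import ipaddress

def prefix_to_netmask(prefix_len):
    return str(ipaddress.ip_network(f"0.0.0.0/{prefix_len}").netmask)

assert prefix_to_netmask(24) == "255.255.255.0"
assert prefix_to_netmask(16) == "255.255.0.0"
```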
1. In your browser, go to the Console, and note the secondary private IP address that you
want to configure on the instance.
2. Connect to the instance, and run the following command at a command prompt:
ipconfig /all
3. Note the values for the following items so you can enter them into the script in the next
step:
- Default Gateway
- DNS Servers
4. Replace the variables in the following PowerShell script with your own values:
$netadapter = Get-NetAdapter -Name Ethernet
$netadapter | Set-NetIPInterface -DHCP Disabled
$netadapter | New-NetIPAddress -AddressFamily IPv4 -IPAddress <secondary_IP_address> -PrefixLength <subnet_prefix_length> -Type Unicast -DefaultGateway <default_gateway>
Set-DnsClientServerAddress -InterfaceAlias Ethernet -ServerAddresses <DNS_server>
For example:
$netadapter = Get-NetAdapter -Name Ethernet
$netadapter | Set-NetIPInterface -DHCP Disabled
$netadapter | New-NetIPAddress -AddressFamily IPv4 -IPAddress 192.168.11.14 -PrefixLength 24 -Type Unicast -DefaultGateway 192.168.11.1
Set-DnsClientServerAddress -InterfaceAlias Ethernet -ServerAddresses 169.254.169.254
5. Save the script with the name of your choice and a .ps1 extension, and run it on the
instance.
If you run ipconfig /all again, you'll see that DHCP has been disabled and the
secondary private IP address is included in the list of IP addresses.
Later if you want to delete the address, you can use this command:
Remove-NetIPAddress -IPAddress 192.168.11.14 -InterfaceAlias Ethernet
Also make sure to unassign the secondary IP from the VNIC. You can do that before or after
executing the above command to delete the address from the OS configuration.
1. In your browser, go to the Console, and note the secondary private IP address that you
want to configure on the instance.
2. Connect to the instance, and run the following command at a command prompt:
ipconfig /all
3. Note the values for the following items so you can enter them elsewhere in a later step:
- IPv4 Address
- Subnet Mask
- Default Gateway
- DNS Servers
4. In the instance's Control Panel, open the Network and Sharing Center.
5. For the active networks, click the connection (Ethernet).
6. Click Properties.
7. Click Internet Protocol Version 4 (TCP/IPv4), and then click Properties.
8. Select the radio button for Use the following IP address, and then enter the values
you noted earlier for the IP address, subnet mask, default gateway, and DNS servers.
9. Click Advanced....
10. Under IP addresses, click Add....
11. Enter the secondary private IP address and the subnet mask you used earlier and click
Add.
You should now see that DHCP is disabled (static IP addressing is enabled), and the
secondary private IP address is in the list of addresses displayed. The address is now
configured on the instance and available to use.
Public IP Addresses
This topic describes how to manage public IP addresses on instances in a virtual cloud
network (VCN).
The assignment is actually to a private IP object on the resource. The VNIC that the private IP
is assigned to must be in a public subnet. A given resource can have multiple secondary
VNICs. And a given VNIC can have multiple secondary private IPs. So you can assign a given
resource multiple public IPs across one or more VNICs if you like.
For a public IP address to be reachable over the internet, the VCN it's in must have an internet
gateway, and the public subnet must have route tables and security lists configured
accordingly.
The Networking service defines an object called a public IP, which consists of:
Each public IP object has an Oracle-assigned OCID (see Resource Identifiers). If you're using
the API, you can also assign each public IP object a friendly name.
- Ephemeral: Think of it as temporary and existing for the lifetime of the instance.
- Reserved: Think of it as persistent and existing beyond the lifetime of the instance it's assigned to. You can unassign it and then reassign it to another instance whenever you like.
The following summarizes the differences between the two types.

Creation
- Ephemeral: Optionally created and assigned during instance launch or secondary VNIC creation. You can create and assign one later if the VNIC doesn't already have one.
- Reserved: You create one at any time. You can then assign it when you like. Limit: You can create 50 per region.

Unassignment
- Ephemeral: You can unassign it at any time, which deletes it. You might do this if whoever launched the instance included a public IP, but you don't want the instance to have one.
- Reserved: You can unassign it at any time, which returns it to your tenancy's pool of reserved public IPs.

Automatic deletion
- Ephemeral: Its lifetime is tied to the private IP's lifetime. It is automatically unassigned and deleted when its private IP is deleted, its VNIC is detached or terminated, or its instance is terminated.
- Reserved: Never. It exists until you delete it.

Compartment and availability domain
- Ephemeral: Same as the private IP's.
- Reserved: Can be different from the private IP's.
When you launch an instance in a public subnet, by default, the instance gets a public IP
unless you say otherwise. See To choose whether an ephemeral public IP is assigned when
launching an instance.
After you create a given public IP, you can't change which type it is. For example, if you
launch an instance that is assigned an ephemeral public IP with address 129.146.1.9, you
can't convert the ephemeral public IP to a reserved public IP with address 129.146.1.9.
There are also limits on the number of public IPs per VNIC and per instance. If you try to perform
any operation that assigns or moves a public IP to a VNIC or instance that has already
reached its public IP limit, an error is returned. The operations include:
- Assigning a public IP
- Creating a new secondary VNIC with a public IP
- Moving a private IP with a public IP to another VNIC
- Moving a public IP to another private IP
In the Create VNIC dialog box, there's an Assign public IP address checkbox. By default,
the checkbox is NOT selected, which means the secondary VNIC does not get an ephemeral
public IP. Select the checkbox if you want the VNIC to be assigned one.
1. Confirm you're viewing the compartment that contains the instance you're interested in.
2. Click Compute, and then click Instances.
3. Click the instance to view its details.
4. Click Attached VNICs.
The primary VNIC and any secondary VNICs attached to the instance are displayed.
5. Click the VNIC you're interested in.
The VNIC's primary private IP and any secondary private IPs are displayed.
6. For the VNIC's primary private IP, click the Actions icon ( ), and then click Edit.
7. In the Public IP Address section, for Public IP type, select the radio button for
Ephemeral Public IP.
8. In the Ephemeral Public IP Name field, enter an optional friendly name for the
public IP. The name doesn't have to be unique, and you can change it later. Avoid
entering confidential information.
9. Click Update.
1. Confirm you're viewing the compartment that contains the instance you're interested in.
2. Click Compute, and then click Instances.
3. Click the instance to view its details.
4. Click Attached VNICs.
The primary VNIC and any secondary VNICs attached to the instance are displayed.
5. Click the VNIC you're interested in.
The VNIC's primary private IP and any secondary private IPs are displayed.
6. For the VNIC's primary private IP, click the Actions icon ( ), and then click Edit.
7. In the Public IP Address section, for Public IP Type, select the radio button for No
Public IP.
8. Click Update.
The reserved public IPs in the selected region and compartment are displayed. The status of
each is shown (whether assigned or available). You can click and view a particular reserved
public IP's information. If the reserved public IP is assigned, the information includes links to
the relevant instance and VNIC.
The new reserved public IP is created and displayed on the page. You can now assign it to an
existing private IP if you like.
1. Confirm you're viewing the region and compartment that contains the reserved public
IP.
2. Click Networking, and then click Public IPs.
3. For the reserved public IP you want to delete, click the Actions icon ( ), and then
click Delete.
4. Confirm when prompted.
After a few seconds, the reserved public IP is unassigned (if it was assigned) and deleted from
your pool.
1. Confirm you're viewing the compartment that contains the instance with the private IP
you're interested in.
2. Click Compute, and then click Instances.
3. Click the instance to view its details.
4. Click Attached VNICs.
The primary VNIC and any secondary VNICs attached to the instance are displayed.
5. Click the VNIC you're interested in.
The VNIC's primary private IP and any secondary private IPs are displayed.
6. For the private IP you're interested in, click the Actions icon, and then click Edit.
7. In the Public IP Address section, for Public IP Type, select the radio button for
Reserved Public IP.
8. Enter the following:
l Compartment: The compartment that contains the reserved public IP you want
to assign.
l Reserved Public IP: The reserved public IP you want to assign. You have three
choices:
o Create a new reserved public IP. You may optionally provide a friendly
name for it. The name doesn't have to be unique, and you can change it
later. Avoid entering confidential information.
1. Confirm you're viewing the compartment that contains the instance with the private IP
you're interested in.
2. Click Compute, and then click Instances.
5. Click Save.
l Internet Resolver: Lets instances use hostnames that are publicly published on
the internet. The instances do not need to have internet access by way of either an
internet gateway or an IPSec VPN connection (via a DRG).
l VCN Resolver: Lets instances use hostnames (which you can assign) to
communicate with other instances in the VCN. For more information, see About the
DNS Domains and Hostnames.
By default, new VCNs you create use the Internet and VCN Resolver. If you're using the
Networking API, this choice refers to the VcnLocalPlusInternet enum in the
DhcpDnsOption object.
Note: The Internet and VCN Resolver does not let instances
resolve the hostnames of hosts in your on-premises
network connected to your VCN by IPSec VPN connection.
Use your own custom DNS resolver to enable that.
CUSTOM RESOLVER
Use your own DNS servers (maximum three). They could be DNS servers that are:
l Available through the internet. For example, 216.146.35.35 for Dyn's Internet Guide.
l In your VCN.
l In your on-premises network, which is connected to your VCN by way of an IPSec VPN
connection or FastConnect (through a DRG).
When you then launch an instance, you may assign a hostname. It's assigned to the VNIC
that's automatically created during instance launch (that is, the primary VNIC). Along with the
subnet domain name, the hostname forms the instance's fully qualified domain name (FQDN):
<hostname>.<subnet DNS label>.<VCN DNS label>.oraclevcn.com
The FQDN resolves to the instance's private IP address. The Internet and VCN Resolver also
enables reverse DNS lookup, which lets you determine the hostname corresponding to the
private IP address.
If you add a secondary VNIC to the instance, you can specify a hostname. The resulting FQDN
resolves to the VNIC's private IP address (that is, the primary private IP).
If you add a secondary private IP to a VNIC, you can specify a hostname. The resulting FQDN
resolves to that private IP address.
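To make the naming concrete, here is a small Python sketch of how the pieces fit together. The label values are placeholders, not names from this guide, and the reverse name is built the standard in-addr.arpa way:

```python
import ipaddress

def instance_fqdn(hostname, subnet_label, vcn_label):
    # FQDN = <hostname>.<subnet DNS label>.<VCN DNS label>.oraclevcn.com
    return f"{hostname}.{subnet_label}.{vcn_label}.oraclevcn.com"

def reverse_name(private_ip):
    # The name queried during a reverse (PTR) lookup of a private IP
    return ipaddress.ip_address(private_ip).reverse_pointer

# instance_fqdn("db-1", "subnet-a", "vcn-1")
#   -> "db-1.subnet-a.vcn-1.oraclevcn.com"
# reverse_name("10.0.0.2") -> "2.0.0.10.in-addr.arpa"
```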
l VCN and subnet labels: Max 15 alphanumeric characters and must start with a letter.
Cannot be changed later.
l Hostnames: Max 63 characters and must be compliant with RFCs 952 and 1123. Can be
changed later.
Uniqueness:
l VCN DNS label should be unique across your VCNs (not required, but a best practice)
l Subnet DNS labels must be unique within the VCN
l Hostnames must be unique within the subnet
If you've set DNS labels for the VCN and subnets, Oracle validates the hostname for DNS
compliance and uniqueness during instance launch. If either of these requirements isn't met,
the launch request fails.
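The label and hostname rules above can be sketched as simple checks. This is a simplified approximation, not the service's actual validation logic; full RFC 952/1123 compliance involves more cases:

```python
import re

def valid_dns_label(label):
    # VCN/subnet labels: max 15 alphanumeric characters, must start with a letter
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z0-9]{0,14}", label))

def valid_hostname(name):
    # Hostnames: max 63 characters, RFC 952/1123 style:
    # letters, digits, and hyphens; must not start or end with a hyphen
    return bool(re.fullmatch(r"[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?", name))
```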
If you don't specify a hostname during instance launch, Oracle tries to use the instance's
display name as the hostname. If the display name does not pass the validation, Oracle
automatically generates a DNS-compliant hostname that's unique across the subnet. You can
see the generated hostname on the instance's page in the Console. In the API, the hostname
is part of the VNIC object.
If you don't provide a hostname or display name during instance launch, Oracle automatically
generates a display name but NOT a hostname. This means the instance won't be resolvable
using the Internet and VCN Resolver.
If you add a secondary VNIC to an instance, or add a secondary private IP to a VNIC, Oracle
never tries to generate a hostname. Provide a valid hostname if you want the private IP
address to be resolvable using the Internet and VCN Resolver.
l Domain Name Server: To specify your choice for DNS type (either Internet and VCN
Resolver, or Custom Resolver).
o Default value in the default set of DHCP options: Internet and VCN
Resolver
l Search Domain: To specify a single search domain. When resolving a DNS query, the
OS appends this search domain to the value being queried.
o Default value in the default set of DHCP options: The VCN domain (<VCN
DNS label>.oraclevcn.com), if you specified a DNS label for the VCN during
creation. If you didn't, the default set of DHCP options does not include a Search
Domain option.
The default set of DHCP options that you can associate with all the subnets in the VCN
automatically uses the default values. This means you can simply use
<hostname>.<subnet DNS label> to communicate with any of the instances in the VCN. If
the VCN uses a set of DHCP options that does not contain a Search Domain option, the
instances must use the entire FQDN to communicate.
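The effect of the Search Domain option on name resolution can be sketched like this. It is a simplification; real OS resolvers also honor settings such as ndots and try candidates in an implementation-defined order:

```python
def lookup_candidates(name, search_domain=None):
    # A trailing dot marks a name that is already fully qualified.
    if name.endswith("."):
        return [name.rstrip(".")]
    candidates = [name]
    if search_domain:
        # The OS appends the search domain to the value being queried.
        candidates.append(f"{name}.{search_domain}")
    return candidates
```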
Only new VCNs created after the release of the Internet and VCN Resolver feature have
automatic access to it. How to enable DNS hostnames for a new VCN depends on which
interface you're using.
1. When creating the VCN:
l Check the checkbox for Use DNS Hostnames in this VCN.
l Specify a DNS label of your choice for the VCN. If you check the checkbox but
don't specify a DNS label, the Console assumes you want to use the Internet
and VCN Resolver in your VCN and automatically generates a DNS label for the
VCN. The Console takes the VCN name you provided, removes non-alphanumeric
characters, ensures that the first character is a letter, and truncates the label to
15 characters. The Console displays the result, and if you don't like it, you can
instead enter your own value in the DNS Label field. See Requirements for DNS
Labels and Hostnames.
2. When creating the subnets:
l Again, check the checkbox for Use DNS Hostnames in this Subnet
l Specify a DNS label of your choice for each subnet. If you check the checkbox but
don't specify the DNS label for a given subnet, the Console assumes you want to
use the Internet and VCN Resolver for the subnet and automatically generates a
DNS label for the subnet. The Console takes the subnet name you provided,
removes non-alphanumeric characters, ensures that the first character is a letter,
and truncates the label to 15 characters. The Console displays the result, and if
you don't like it, you can instead enter your own value in the DNS Label field.
See Requirements for DNS Labels and Hostnames.
l Associate any set of DHCP options that has DNS type = Internet and VCN
Resolver. The default set of DHCP options in the VCN uses the Internet and VCN
Resolver by default.
3. When launching instances:
l Specify a hostname (or at least a display name) for each instance. For more
information, see Validation and Generation of the Hostname.
If you don't check the checkbox for Use DNS Hostnames in this VCN when creating the
VCN, you can't set the DNS label for the VCN or subnets, and you can't specify a hostname
during instance launch.
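The Console's label-generation behavior described above (take the name, remove non-alphanumeric characters, force a leading letter, truncate to 15) might be approximated as follows. How the Console actually guarantees a leading letter is not documented here; dropping leading digits is an assumption in this sketch:

```python
import re

def generate_dns_label(display_name, fallback="vcn"):
    # Keep only alphanumeric characters
    label = re.sub(r"[^A-Za-z0-9]", "", display_name)
    # Ensure the first character is a letter (assumption: leading digits dropped)
    label = label.lstrip("0123456789")
    # Truncate to 15 characters
    return label[:15] if label else fallback
```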
Note: The above procedure assumes you create the VCN and
subnets one at a time in the Console. The Console has a
feature that automatically creates a VCN with subnets and an
internet gateway all at the same time. If you use that feature
to create the VCN and subnets, the Console automatically
generates DNS labels for them.
If you don't specify a DNS label when creating the VCN, you can't set the DNS label for the
subnets (the CreateSubnet call will fail), nor specify a hostname during instance launch (the
LaunchInstance call will fail). You also cannot assign a hostname to a secondary VNIC or a
secondary private IP.
Scenario 1: Use Internet and VCN Resolver with DNS Hostnames Across the VCN
The typical scenario is to enable the Internet and VCN Resolver across your entire VCN. This
means all instances in the VCN can communicate with each other without knowing their IP
addresses. To do that, follow the instructions for creating a new VCN in How to Enable DNS
Hostnames in Your VCN, and make sure to assign a DNS label to the VCN and every subnet.
Then make sure to assign every instance a hostname (or at least a display name) at launch. If
you add a secondary VNIC or secondary private IP, also assign it a hostname. The instances
can then communicate with each other using FQDNs instead of IP addresses. If you also set
the Search Domain DHCP option to the VCN domain name (<VCN DNS
label>.oraclevcn.com), the instances can then communicate with each other using just
<hostname>.<subnet DNS label> instead of the FQDN.
You can set up an instance to be a custom DNS server within your VCN and configure it to
resolve the hostnames that you set when launching the instances. You must configure the
servers to use 169.254.169.254 as the forwarder for the VCN domain (that is, <VCN DNS
label>.oraclevcn.com).
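For example, if the custom DNS server runs BIND (an assumption; any resolver with per-zone forwarding works, and vcn-1 is a placeholder DNS label), the forwarder could be declared as:

```
zone "vcn-1.oraclevcn.com" {
    type forward;
    forward only;
    forwarders { 169.254.169.254; };
};
```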
For an example of an implementation of this scenario with the Oracle Terraform provider, see
Hybrid DNS Configuration.
Scenario 1 assumes you want to use the Internet and VCN Resolver the same way across all
subnets, and thus all instances in the VCN. You could, however, configure different DNS
settings for each subnet, because the DHCP options are configured at the subnet level. The
important thing to understand is this: The subnet where you want to generate the DNS query
is where you need to configure the corresponding Internet and VCN Resolver settings.
For example, if you want instance A in subnet A to resolve the hostname of instance B in
subnet B, you must configure subnet A to use the Internet and VCN Resolver. Conversely, if
you want instance B to resolve the hostname of instance A, you must configure subnet B to
use the Internet and VCN Resolver.
You can configure a different set of DHCP options for each subnet. For example, you could set
subnet A's Search Domain to subnet-a.vcn-1.oraclevcn.com, which means all instances in
subnet A could use just hostnames to communicate with each other. You could similarly set
subnet B's Search Domain to subnet-b.vcn-1.oraclevcn.com to enable subnet B's instances
to communicate with each other with just hostnames. But that means any instances in a given
subnet would need to use FQDNs to communicate with instances in other subnets.
DOMAIN
Domain names identify a specific location or group of locations on the Internet as a whole.
A common definition of "domain" is the complete portion of the DNS tree that has been
delegated to a user's control. For example, example.com or oracle.com.
ZONE
A zone is a portion of the DNS namespace. A Start of Authority record (SOA) defines a
zone. A zone contains all labels underneath itself in the tree, unless otherwise specified.
LABEL
Labels are prepended to the zone name, separated by a period, to form the name of a
subdomain. For example, the "www" section of www.example.com or the "docs" and "us-
ashburn-1" sections of docs.us-ashburn-1.oraclecloud.com are labels. Records are
associated with these domains.
CHILD ZONE
Child zones are independent subdomains with their own Start of Authority and Name
Server (NS) records. The parent zone of a child zone must contain NS records that refer
DNS queries to the name servers responsible for the child zone. Each subsequent child
zone creates another link in the delegation chain.
RESOURCE RECORDS
A record contains specific domain information for a zone. Each record type contains
information called record data (RDATA). For example, the RDATA of an A or AAAA record
contains an IP address for a domain name, while MX records contain information about
the mail server for a domain.
DELEGATION
Delegation directs DNS queries for a subdomain to a different set of authoritative name
servers, by way of NS records placed in the parent zone.
To access the Console, you must use a supported browser. You can use the Console link at the
top of this page to go to the sign-in page. Enter your tenancy, user name, and your password.
An administrator in your organization needs to set up groups, compartments, and policies that
control which users can access which services, which resources, and the type of access. For
example, the policies control who can create new users, create and manage the cloud
network, launch instances, create buckets, download objects, etc. For more information, see
Getting Started with Policies. For specific details about writing policies for each of the
different services, see Policy Reference.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud
Infrastructure resources that your company owns, contact your administrator to set up a user
ID for you. The administrator can confirm which compartment or compartments you should be
using.
If you're new to policies, see Getting Started with Policies and Common Policies. For more
details about policies for DNS, see Details for the DNS Service.
To add a zone
1. Open the Console, click Networking, and then click DNS.
2. Click Add Zone.
3. In the Add Zone dialog box, choose one of the following methods:
l Manual - Enter the following:
o Zone Name: Enter the name of a zone you want to create.
o Zone Type: If you want to control the zone contents directly within OCI,
select Primary. If you want OCI to pull zone contents from an external
server, select Secondary and enter your Zone Master Server IP
address.
l Import - Drag and drop, select, or paste a valid zone file into the Import Zone
File window. The zone is imported as a Primary zone.
4. Click Continue.
5. Click Publish Changes.
6. In the confirmation dialog box, click OK.
The system creates and publishes the zone, complete with the necessary SOA and NS records.
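For reference, a minimal zone file of the kind the Import option accepts might look like the fragment below. All names, addresses, and timer values are illustrative placeholders in standard RFC 1035 master-file syntax:

```
$ORIGIN example.com.
$TTL 3600
@    IN  SOA  ns1.example.com. hostmaster.example.com. (
         2018013101 ; serial
         3600       ; refresh
         600        ; retry
         604800     ; expire
         1800 )     ; negative-caching TTL
@    IN  NS   ns1.example.com.
www  IN  A    203.0.113.10
```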
For more information on adding a record to your zone, see Add a Zone Record.
2. Click the Actions icon for the secondary zone you want to update.
3. Click Manage Master Servers. A list of Master Server IPs for the zone appears.
4. Click the Actions icon for the Master Server IP you want to update, and then
click Edit.
5. Make the needed changes, and then click Update.
6. (Optional) Click Add Master Server to add another Master Server IP address.
7. Click Publish Changes.
8. In the confirmation dialog box, click OK.
To delete a zone
There are many record types you can add to your zone.
Protected Records
Before you publish changes, you can revert a record to its current published state. Once a
change has been published, it cannot be reverted.
1. Click the Actions icon for the record you want to revert, and then click Revert.
A confirmation message shows the changes to discard.
2. Click Discard to discard the changes.
2. Click the Actions icon for the zone from which you want to delete a record.
To delegate a zone
To make your Oracle Cloud Infrastructure hosted zone accessible through the internet, you
must delegate your domain with your domain's registrar.
l GetZone
l ListZones
l CreateZone
l UpdateZone
l DeleteZone
l PatchZoneRecords (add or delete records)
l UpdateZoneRecords
DNS Reports show how much DNS traffic your zones have received for up to the past 90 days.
Use DNS query counts by zone to understand the distribution of queries or ensure that new
zones and configurations work correctly. DNS query counts include the last 30 days by
default. Use the Date Range option to view query counts for different periods of time.
To view DNS reports, open the Console, click Networking, and then click DNS Reporting.
l Total Queries - The sum of all queries across all zones for the selected date range.
l Zone Name - The name of the zone that received queries in the selected date range.
l Query Count - The number of DNS queries received by your zones.
l Query Percentage (%) - The percentage of total queries for each zone.
To view DNS Reports data for a specific date or range, select a date option from the Time
Range drop-down list.
A
An address record used to point a hostname to an IPv4 address. For more information
about A records, see RFC 1035.
AAAA
An address record used to point a hostname to an IPv6 address. For more information about
AAAA records, see RFC 3596.
ALIAS
A private pseudo-record that allows CNAME functionality at the apex of a zone. You can
view and read ALIAS records in Oracle Cloud Infrastructure DNS, but you cannot create
them.
CDNSKEY
A Child DNSKEY record moves a DNSSEC key from a child zone to a parent zone. The
information provided in this record must match the CDNSKEY information for your domain
at your other DNS provider. This record is automatically created if you enable DNSSEC on
a primary zone in Oracle Cloud Infrastructure DNS. For more information about CDNSKEY,
see RFC 7344.
CDS
A Child Delegation Signer record is a child copy of a DS record, for transfer to a parent
zone. For more information about CDS records, see RFC 7344.
CERT
A Certificate record stores public key certificates and related certificate revocation lists in
the DNS. For more information about CERT records, see RFC 2538 and RFC 4398.
CNAME
A Canonical Name record identifies the canonical name for a domain. For more
information about CNAME records, see RFC 1035.
CSYNC
A Child-to-Parent Synchronization record syncs records from a child zone to a parent
zone. For more information about CSYNC records, see RFC 7477.
DHCID
A DHCP identifier record provides a way to store DHCP client identifiers in the DNS to
eliminate potential hostname conflicts within a zone. For more information about DHCID,
see RFC 4701.
DKIM
A DomainKeys Identified Mail (DKIM) record is a special TXT record set up specifically to
supply a public key used to authenticate arriving mail for a domain. For more information
about DKIM records, see RFC 6376.
DNAME
A Delegation Name record has similar behavior to a CNAME record, but allows you to map
an entire subtree beneath a label to another domain. For more information about DNAME
records, see RFC 6672.
DNSKEY
A DNS Key record documents public keys used for DNSSEC. The information in this record
must match the DNSKEY information for your domain at your other DNS provider. For
more information about DNSKEY records, see RFC 4034.
DS
A Delegation Signer record resides at the top-level domain and points to a child zone's
DNSKEY record. DS records are created when DNSSEC security authentication is added to
the zone. For more information about DS records, see RFC 4034.
IPSECKEY
An IPSec Key record stores public keys for a host, network, or application to connect to IP
security (IPSec) systems. For more information on IPSECKEY records, see RFC 4025.
KEY
A Key record stores a public key that is associated with a domain name. Currently only
used by SIG and TKEY records. IPSECKEY and DNSKEY have replaced KEY for use in IPSec
and DNSSEC, respectively. For more information about KEY records, see RFC 2535.
KX
A Key Exchanger record identifies a key management agent for the associated domain
name with some cryptographic systems (not including DNSSEC). For more information
about KX records, see RFC 2230.
LOC
A Location record stores geographic location data of computers, subnets, and networks
within the DNS. For more information about LOC records, see RFC 1876.
MX
A Mail Exchanger record defines the mail server accepting mail for a domain. MX records
must not point to a CNAME or IP address. For more information about MX records, see
RFC 1035.
NS
A Nameserver record lists the authoritative nameservers for a zone. Oracle Cloud
Infrastructure DNS automatically generates NS records at the apex of each new primary
zone. For more information about NS records, see RFC 1035.
PTR
A Pointer record reverse maps an IP address to a hostname. This behavior is the opposite
of an A Record, which forward maps a hostname to an IP address. PTR records are
commonly found in reverse DNS zones. For more information about PTR records, see RFC
1035.
PX
A resource record used in X.400 mapping protocols. For more information about PX
records, see RFC 822 and RFC 2163.
SOA
A Start of Authority record specifies authoritative information about a zone, including the
primary name server and the zone's serial number. Oracle Cloud Infrastructure DNS
automatically generates an SOA record when a zone is created. For more information
about SOA records, see RFC 1035.
SPF
A Sender Policy Framework record is a special TXT record used to store data designed to
detect email spoofing. For more information about SPF records, see RFC 4408.
SRV
A Service Locator record allows administrators to use several servers for a single domain.
For more information about SRV records, see RFC 2782.
SSHFP
An SSH Public Key Fingerprint record publishes SSH public host key fingerprints using the
DNS. For more information about SSHFP records, see RFC 6594.
TLSA
A Transport Layer Security Authentication record associates a TLS server certificate, or
public key, with the domain name where the record is found. This relationship is called a
TLSA certificate association. For more information about TLSA records, see RFC 6698.
TXT
A Text record holds descriptive, human readable text, and can also include non-human
readable content for specific uses. It is commonly used for SPF records and DKIM records
that require non-human readable text items. For more information about TXT records, see
RFC 1035.
Route Tables
This topic describes how to manage the route tables in a virtual cloud network (VCN).
l internet gateway: Use this target with a public subnet that needs access to the internet.
l dynamic routing gateway (DRG): Use this target with any subnet that needs private
access to networks connected to your VCN (for example, over an IPSec VPN or
FastConnect).
l local peering gateway (LPG): Use this target with any subnet that needs access to a
peered VCN in the same region.
l private IP: Use this target with any subnet that needs to route traffic to an instance in
the VCN. For more information, see Using a Private IP as a Route Target.
Each VCN automatically comes with a default route table that has no rules. If you don't specify
otherwise, every subnet uses the VCN's default route table. When you add route rules to your
VCN, you can simply add them to the default table if that suits your needs. However, if you
need both a public subnet and a private subnet (for example, see Scenario C: Public and
Private Subnets with a VPN), you instead create a separate route table for each subnet.
Each subnet in a VCN uses a single route table. When you create the subnet, you specify which
one to use. You can't change which route table a subnet uses after the subnet is created, so
make sure to create the route table before creating the subnet. However, remember that you
can also change a table's rules.
You may optionally assign a friendly name to the route table during creation. It doesn't have
to be unique, and you can change it later. Oracle automatically assigns the route table a
unique identifier called an Oracle Cloud ID (OCID). For more information, see Resource
Identifiers.
When adding a route rule to a route table, you provide the destination CIDR block and target
(plus the compartment where the target resides). If you misconfigure a rule (for example,
enter the wrong CIDR block), the network traffic you intended to route will be dropped
(blackholed) if no other rule in the table matches that traffic.
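Rule selection can be modeled with Python's ipaddress module. This is a simplified sketch of longest-prefix matching over (CIDR, target) pairs, not the service's actual implementation; intra-VCN traffic, which is routed implicitly, is outside this model:

```python
import ipaddress

def select_target(route_rules, destination):
    """Pick the most specific rule matching the destination, or None.
    route_rules is a list of (cidr, target) pairs; traffic that matches
    no rule is dropped (blackholed)."""
    dest = ipaddress.ip_address(destination)
    best = None
    for cidr, target in route_rules:
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None
```

For example, with rules for 0.0.0.0/0 > internet gateway and 172.16.0.0/12 > DRG, traffic to 172.16.5.4 matches the more specific DRG rule, while everything else falls through to the internet gateway.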
To delete a route table, it must not be associated with any subnet. You can't delete a VCN's
default route table.
For information about the maximum number of route tables and route rules, see Service
Limits.
You can use a private IP as the target of a route rule in situations where you want to route a
subnet's traffic to another instance. Here are a few reasons you might do this:
l To implement Network Address Translation (NAT) in the VCN, which enables outbound
internet access for instances that don't have direct internet connectivity.
l To implement a virtual network function (such as a firewall or intrusion detection) that
filters outgoing traffic from instances.
l To manage an overlay network on the VCN, which lets you run container orchestration
workloads.
To implement these use cases, there's more to do than simply route traffic to the instance.
There's also configuration required on the instance itself.
l The private IP's VNIC must be configured to skip the source/destination check so that
the VNIC can forward traffic. By default, VNICs are configured to perform the check.
For more information, see Source/Destination Check.
l The route rule must specify the OCID of the private IP as the target, and not the IP
address itself. Exception: If you use the Console, you can instead specify the private IP
address itself as the target, and the Console determines and uses the corresponding
OCID in the rule.
1. Determine which instance will receive and forward the traffic (the NAT instance, for
example).
You must configure the instance itself to forward packets. For more information, see NAT
Instance Configuration.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
1. In the Console, click Networking, and then click Virtual Cloud Networks.
A list of the cloud networks in the compartment you're viewing is displayed. If you don’t
see the one you're looking for, make sure you’re viewing the correct compartment
(select from the list on the left side of the page).
2. Click the cloud network you're interested in.
3. Click Route Tables.
4. Click the route table you're interested in.
5. Click Edit Route Rules.
6. If you want to create a new route rule, click + Another Route Rule and enter the
following:
l Destination CIDR Block: The destination CIDR block for the traffic. A value of
0.0.0.0/0 means that all non-intra-VCN traffic that is not already covered by other
rules in the route table will go to the target specified in this rule.
l Target Type: See the list of target types in Working with Route Tables. If you
choose a private IP for the target, make sure you've first disabled the
source/destination check on the private IP's VNIC. For more information, see
Using a Private IP as a Route Target.
l Compartment: The compartment where the target is located.
l Target: The target. If the target is a private IP, enter its OCID. Or you can enter
the private IP address itself, in which case the Console determines the
corresponding OCID and uses it as the target for the route rule.
7. If you want to delete an existing route rule, click the X next to the rule.
8. If you want to edit an existing rule, make your changes to the rule.
9. Click Save.
l Create in Compartment: The compartment where you want to create the route
table, if different from the compartment you're currently working in.
l Name: A friendly name for the route table. The name doesn't have to be unique,
and it cannot be changed later in the Console (but you can change it with the API).
Avoid entering confidential information.
6. Add at least one route rule with the following information:
l Destination CIDR Block: The destination CIDR block for the traffic. A value of
0.0.0.0/0 means that all non-intra-VCN traffic that is not already covered by other
rules in the route table will go to the target specified in this rule.
l Target Type: See the list of target types in Working with Route Tables. If you
choose a private IP for the target, make sure you've first disabled the
source/destination check on the private IP's VNIC. For more information, see
Using a Private IP as a Route Target.
l Compartment: The compartment where the target is located.
l Target: The target. If the target is a private IP, enter its OCID. Or you can enter
the private IP address itself, in which case the Console determines the
corresponding OCID and uses it as the target for the route rule.
7. Tags: Optionally, you can apply tags. If you have permissions to create a resource, you
also have permissions to apply free-form tags to that resource. To apply a defined tag,
you must have permissions to use the tag namespace. For more information about
tagging, see Resource Tags. If you are not sure if you should apply tags, skip this option
(you can apply tags later) or ask your administrator.
8. Click Create Route Table.
The route table is created and then displayed on the Route Tables page in the
compartment you chose. You can now specify this route table when creating a subnet.
1. In the Console, click Networking, and then click Virtual Cloud Networks.
A list of the cloud networks in the compartment you're viewing is displayed. If you don’t
see the one you're looking for, make sure you’re viewing the correct compartment
(select from the list on the left side of the page).
2. Click the cloud network you're interested in.
3. Click Route Tables.
4. For the route table you want to delete, click the Actions icon, and then click
Terminate.
5. Confirm when prompted.
l ListRouteTables
l GetRouteTable
l UpdateRouteTable
l CreateRouteTable
l DeleteRouteTable
DHCP Options
This topic describes how to manage the Dynamic Host Configuration Protocol (DHCP) options
in a virtual cloud network (VCN).
l Domain Name Server: DNS Type: Internet and VCN Resolver. For more information,
see DNS in Your Virtual Cloud Network.
l Search Domain: This option is present in the default set of DHCP options only if you
specify a DNS label for the VCN during creation. In that case, the option's default value
is the VCN domain name (<VCN DNS label>.oraclevcn.com). For more information,
see About the DNS Domains and Hostnames.
Each subnet in a VCN can have a single set of DHCP options associated with it. That set of
options applies to all instances in the subnet. When you create the subnet, you specify which
set to associate with the subnet. If you don't, the default set of DHCP options for the VCN is
used. You can't change which set of DHCP options is associated with a subnet after the subnet
is created. If you don't want to use the default set, make sure to create your desired set of
DHCP options before creating the subnet. However, remember that you can also change the
values for the options.
When creating a new set of DHCP options, you may optionally assign it a friendly name. It
doesn't have to be unique, and you can change it later. Oracle will automatically assign the set
of options a unique identifier called an Oracle Cloud ID (OCID). For more information, see
Resource Identifiers.
Make sure to keep the DHCP client running so you can always
access the instance. If you stop the DHCP client manually or
disable NetworkManager (which stops the DHCP client on
Linux instances), the instance can't renew its DHCP lease and
will become inaccessible when the lease expires (typically
within 24 hours). Do not disable NetworkManager unless you
use another method to ensure renewal of the lease.
Stopping the DHCP client might remove the host route table
when the lease expires. Also, loss of network connectivity to
your iSCSI connections might result in loss of the boot drive.
You can change the values of an individual DHCP option in a set. However, when you update
even a single option through the API, the set of options you provide replaces the entire
existing set.
To delete a set of DHCP options, it must not be associated with any subnet. You can't delete a
cloud network's default set of DHCP options.
For information about the maximum number of DHCP options allowed, see Service Limits.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
The set of options is created and then displayed on the DHCP Options page of the
compartment you chose. You can now specify this set of options when creating a new subnet.
l ListDhcpOptions
l GetDhcpOptions
l UpdateDhcpOptions
l CreateDhcpOptions
l DeleteDhcpOptions
You can think of an internet gateway as a router connecting the edge of the cloud network with
the internet. Traffic that originates in your VCN and is destined for a public IP address outside
the VCN goes through the internet gateway.
For some simple scenarios that use an internet gateway, see Typical Networking Scenarios.
You create an internet gateway in the context of a specific cloud network. In other words, the
internet gateway is automatically attached to a cloud network. However, you can disable and
re-enable the internet gateway at any time. Compare this with a Dynamic Routing Gateway,
which you create as a standalone object that you then attach to a particular cloud network.
Dynamic Routing Gateways use a different model because they're intended to be modular
building blocks for privately connecting cloud networks to your on-premises network.
For traffic to flow between a subnet and an internet gateway, you must create a route rule
accordingly in the subnet's route table (for example, 0.0.0.0/0 > internet gateway). If the
internet gateway is disabled, no traffic flows to or from the internet even if
there's a route rule that enables that traffic. For more information, see Route Tables.
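As a mental model only (not the service's actual implementation), the interaction between the route rule and the gateway's enabled state can be sketched with Python's stdlib ipaddress module. The rule and gateway names are illustrative; a matching rule alone is not enough if the gateway is disabled:

```python
import ipaddress

# Hypothetical route table mirroring "0.0.0.0/0 > internet gateway",
# plus the gateway's enabled/disabled state.
route_rules = [("0.0.0.0/0", "internet-gateway")]
gateway_enabled = {"internet-gateway": False}  # gateway has been disabled

def next_hop(dest_ip):
    """Most-specific-match over the rules; None means traffic is dropped."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in route_rules
               if dest in ipaddress.ip_network(cidr)]
    if not matches:
        return None
    net, target = max(matches, key=lambda m: m[0].prefixlen)
    # Even with a matching rule, a disabled gateway passes no traffic.
    return target if gateway_enabled.get(target, True) else None

print(next_hop("203.0.113.10"))  # None while the gateway is disabled
gateway_enabled["internet-gateway"] = True
print(next_hop("203.0.113.10"))  # internet-gateway
```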
For the purposes of access control, you must specify the compartment where you want the
internet gateway to reside. If you're not sure which compartment to use, put the internet
gateway in the same compartment as the cloud network. For more information, see Access
Control.
You may optionally assign a friendly name to the internet gateway. It doesn't have to be
unique, and you can change it later. Oracle will automatically assign the internet gateway a
unique identifier called an Oracle Cloud ID (OCID). For more information, see Resource
Identifiers.
To delete an internet gateway, it does not have to be disabled, but there must not be a route
table that lists it as a target.
An internet gateway is now enabled and working for your cloud network.
1. In the Console, click Networking, and then click Virtual Cloud Networks.
A list of your cloud networks is displayed.
2. Click the cloud network you're interested in.
Its details are displayed.
3. Click Internet Gateways.
4. Click Terminate for the internet gateway.
5. Confirm when prompted.
l ListInternetGateways
l CreateInternetGateway
l GetInternetGateway
l UpdateInternetGateway: You can disable/enable the gateway or change the gateway's
name.
l DeleteInternetGateway
VCN Peering
VCN peering is the process of connecting multiple virtual cloud networks (VCNs). This topic
talks specifically about local VCN peering. In this case, local means that the VCNs reside in
the same region.
You can use VCN peering to divide your network into multiple VCNs (for example, based on
departments or lines of business), with each VCN having direct, private access to the others.
There's no need for traffic to flow through your on-premises network via an IPSec VPN or
FastConnect. You can also place shared resources into a single VCN that all the other VCNs
can access privately.
At a high level, the Networking service components required for a local peering include:
Note: A given VCN can use the peered LPGs to reach only
VNICs in the other VCN, and not destinations outside of the
VCNs (such as the internet or your on-premises network).
For example, if VCN-1 in the preceding diagram were to
have an internet gateway, the instances in VCN-2 could NOT
use it to send traffic to endpoints on the internet. However,
be aware that VCN-2 could receive traffic from the internet
via VCN-1. For more information, see Important
Implications of Peering.
Peering involves two VCNs that might be owned by the same party or two different ones. The
two parties might both be in your company but in different departments.
Peering between two VCNs requires explicit agreement from both parties in the form of Oracle
Cloud Infrastructure Identity and Access Management policies that each party implements for
their own VCN's compartment.
PEERING
A peering is a single peering relationship between two VCNs. Example: If VCN-1 peers
with three other VCNs, then there are three peerings. The local part of local peering
indicates that the VCNs are in the same region. A given VCN can have a maximum of 10
peerings at a time.
VCN ADMINISTRATORS
In general, VCN peering can occur only if both VCN administrators agree to it. In practice,
this means that the two administrators must each set up the required IAM policies and
configure their own VCN as described in the sections that follow.
Depending on the situation, a single administrator might be responsible for both VCNs and
the related policies.
For more information about the required policies and VCN configuration, see Setting Up a
Local Peering.
To implement the IAM policies required for peering, the two VCN administrators must
designate one administrator as the requestor and the other as the acceptor. The requestor
must be the one to initiate the request to connect the two LPGs. In turn, the acceptor must
create a particular IAM policy that gives the requestor permission to connect to LPGs in
the acceptor's compartment. Without that policy, the requestor's request to connect fails.
PEERING CONNECTION
When the requestor initiates the request to peer (via the Console or API), they're
effectively asking to connect the two LPGs. This means the requestor must have
information to identify each LPG (such as the LPG's compartment and name).
Either VCN administrator can terminate a peering by deleting their LPG. In that case, the
other LPG's status switches to REVOKED. The administrator could instead render the
connection non-functional by removing the route rules or security lists that enable traffic
to flow across the connection (see the next sections).
As part of configuring the VCNs, each administrator must update the VCN's routing to
enable traffic to flow between the VCNs. In practice, this looks just like routing you set up
for any gateway (such as an internet gateway or dynamic routing gateway). For each
subnet that needs to communicate with the other VCN, you update the subnet's route
table. The route rule specifies the destination traffic's CIDR and your LPG as the target.
Your LPG routes traffic that matches that rule to the other LPG, which in turn routes the
traffic to the next hop in the other VCN.
In the following diagram, VCN-1 and VCN-2 are peered. Traffic from an instance in Subnet
A (10.0.0.15) that is destined for an instance in VCN-2 (192.168.0.15) is routed to LPG-1
based on the rule in Subnet A's route table. From there the traffic is routed to LPG-2, and
then from there, on to its destination in Subnet X.
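The routing decision in Subnet A can be sketched as follows, assuming most-specific-match semantics. The CIDRs come from the example above (VCN-2 at 192.168.0.0/16); the default route to an internet gateway is an illustrative addition, not part of the diagram:

```python
import ipaddress

# Subnet A's hypothetical route table: a specific rule for the peered
# VCN-2, and a broad default route for everything else.
subnet_a_rules = {
    "192.168.0.0/16": "LPG-1",
    "0.0.0.0/0": "internet-gateway",
}

def route(dest_ip):
    """Return the target of the most specific matching rule."""
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in subnet_a_rules.items():
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

print(route("192.168.0.15"))  # LPG-1 (the more specific peer rule wins)
print(route("203.0.113.9"))   # internet-gateway
```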
Each subnet in a VCN has one or more security lists that control traffic in and out of the
subnet's VNICs at the packet level. You can use security lists to control the type of traffic
allowed with the other VCN. As part of configuring the VCNs, each administrator must
determine which subnets in their own VCN need to communicate with VNICs in the other
VCN and update their subnet's security lists accordingly.
l Who in your organization has the authority to establish VCN peerings (for example, see
the IAM policies in Setting Up a Local Peering). Be aware that deletion of these IAM
policies does not affect any existing peerings, only the ability for future peerings to be
created.
l Who can manage route tables and security lists.
Even if a peering connection has been established between your VCN and another, you can
control the packet flow over the connection with route tables in your VCN. For example, you
can restrict traffic to only specific subnets in the other VCN.
Without deleting your LPG and terminating the peering, you can stop traffic flow to the other
VCN by simply removing route rules that direct traffic from your VCN to the other VCN. You
can also effectively stop the traffic by removing any security list rules that enable ingress or
egress traffic with the other VCN. This doesn't stop traffic flowing over the peering
connection, but stops it at the VNIC level.
For more information about the routing and security lists, see the discussions in these
sections:
It's important that each VCN administrator ensure that all outbound and inbound traffic with
the other VCN is intended/expected and well defined. In practice, this means implementing
security list rules that explicitly state the types of traffic your VCN can send to the other and
accept from the other.
In addition to security lists and firewalls, you should evaluate other OS-based configuration on
the instances in your VCN. There could be default configurations that don't apply to your own
VCN's CIDR, but inadvertently apply to the other VCN's CIDR.
If your VCN's subnets use the default security list with the default rules it comes with, be
aware that there are two rules that allow ingress traffic from anywhere (that is, 0.0.0.0/0,
and thus the other VCN):
l Stateful ingress rule that allows TCP port 22 (SSH) traffic from 0.0.0.0/0 and any source
port
l Stateful ingress rule that allows ICMP type 3, code 4 traffic from 0.0.0.0/0 and any
source port
Make sure to evaluate these rules and whether you want to keep or update them. As stated
earlier, you should ensure that all inbound or outbound traffic that you permit is
intended/expected and well defined.
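The point that 0.0.0.0/0 necessarily includes the peered VCN can be checked mechanically. This sketch uses Python's stdlib ipaddress module; the peer CIDR and the tightened source CIDR are illustrative values, not defaults from the service:

```python
import ipaddress

# The two default ingress rules (SSH TCP 22, ICMP type 3 code 4) admit
# traffic from 0.0.0.0/0, which contains any peered VCN's CIDR.
default_ingress_sources = ["0.0.0.0/0"]
peer_vcn_cidr = ipaddress.ip_network("192.168.0.0/16")  # example peer CIDR

def rule_admits_peer(rule_source_cidr, peer_cidr):
    """True if the rule's source CIDR fully contains the peer's CIDR."""
    return peer_cidr.subnet_of(ipaddress.ip_network(rule_source_cidr))

# Both default rules admit the entire peered VCN:
assert all(rule_admits_peer(c, peer_vcn_cidr) for c in default_ingress_sources)

# Tightening the source to, say, your on-premises CIDR excludes the peer:
assert not rule_admits_peer("10.0.0.0/8", peer_vcn_cidr)
```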
In general, you should prepare your VCN for the ways it could be affected by the other VCN.
For example, the load on your VCN or its instances could increase. Or your VCN could
experience a malicious attack directly from or via the other VCN.
Regarding security risks: You can't necessarily control whether the other VCN is connected to
the internet. If it is, be aware that your VCN can be exposed to bounce attacks in which a
malicious host on the internet can send traffic to your VCN but make it look like it's coming
from the VCN you're peered with. To guard against this, as mentioned earlier, use your
security lists to carefully limit the inbound traffic from the other VCN to expected and well-
defined traffic.
A. Create the LPGs: Each VCN administrator creates an LPG for their own VCN.
B. Share information: The administrators share with each other the basic required
information.
C. Set up the required IAM policies for the connection: The administrators set up
IAM policies to enable the connection to be established.
D. Establish the connection: The requestor connects the two LPGs.
E. Update route tables: Each administrator updates their VCN's route tables to enable
traffic between the peered VCNs as desired.
F. Update security lists: Each administrator updates their VCN's security lists to enable
traffic between the peered VCNs as desired.
Note: If desired, the administrators can perform tasks E and F before establishing the
connection. Each administrator needs to know the CIDR block or specific subnets from
the other's VCN and share that in task B. Note that after the connection is established,
you can also get the CIDR block of the other VCN by viewing your own LPG's details in
the Console. Look for Peer Advertised CIDR. Or if you're using the API, see the
peerAdvertisedCidr parameter.
1. In the Console, confirm you're viewing the compartment that contains the VCN that you
want to add the LPG to. If you've just created the VCN, you should still be viewing the
same compartment. If you click Networking and then click Virtual Cloud Networks,
you should see the VCN. For information about compartments and access control, see
Access Control.
2. On the Virtual Cloud Networks page, click the VCN you're interested in.
3. Click Create Local Peering Gateway.
The requestor is in an IAM group called RequestorGrp. This policy lets anyone in the
group initiate a connection from any LPG in the requestor's compartment
(RequestorComp). Policy R can be attached to either the tenancy (root compartment) or
to RequestorComp. For information about why you would attach it to one versus the
other, see Policy Attachment.
l Policy A (implemented by the acceptor):
Allow group RequestorGrp to manage local-peering-to in compartment AcceptorComp
This statement lets the requestor connect to any LPG in the acceptor's compartment
(AcceptorComp), and it reflects the required agreement from the acceptor for the peering
to be established. Policy A can be attached to either the tenancy (root compartment) or to
AcceptorComp.
Both Policy R and Policy A give RequestorGrp access. However, Policy R has a resource-type
called local-peering-from, and Policy A has a resource-type called local-peering-to. Together,
these policies let someone in RequestorGrp establish the connection from an LPG in the
requestor's compartment to an LPG in the acceptor's compartment. The API call to actually
create the connection specifies which two LPGs.
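As an illustration only, reusing the example group and compartment names above (the Policy R statement is reconstructed from the local-peering-from resource-type the text describes, so treat it as a sketch rather than verbatim documentation), the complementary pair of statements looks like this:

```text
Policy R (implemented by the requestor):
Allow group RequestorGrp to manage local-peering-from in compartment RequestorComp

Policy A (implemented by the acceptor):
Allow group RequestorGrp to manage local-peering-to in compartment AcceptorComp
```

Neither statement alone is sufficient: the "from" side authorizes initiating the connection from the requestor's LPG, and the "to" side authorizes connecting to the acceptor's LPG.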
1. In the Console, view the details for the requestor LPG that you want to connect to the
acceptor LPG.
2. Click Establish Connection.
The resulting dialog box lets you choose the VCN and LPG you want to peer with. They
each might be in a different compartment than the one you're currently working in.
3. Select values for the following:
l Virtual Cloud Network Compartment: The compartment that contains the
VCN you want to peer with.
l Virtual Cloud Network: The VCN you want to peer with.
l Local Peering Gateway Compartment: The compartment that contains the
LPG you want to establish the connection with.
l Unpeered Peer Gateway: The LPG you want to establish the connection with.
4. Click Establish Peering Connection.
At this point, the details of each LPG update to show the Peer VCN CIDR Block for the other
VCN. This is the CIDR of the other VCN across the peering from the LPG. Each administrator
can use this information to update the route tables and security lists for their own VCN.
Prerequisite: Each administrator must have the CIDR block or specific subnets for the other
VCN. If the connection is already established, look at the Peer VCN CIDR Block for your
LPG in the Console. Otherwise, get the information from the other administrator via email or
other method.
1. Determine which subnets in your VCN need to communicate with the other VCN.
2. Update the route table for each of those subnets to include a new rule that directs traffic
destined for the other VCN's CIDR to your LPG:
a. In the Console, click Networking, and then click Virtual Cloud Networks.
b. Click the VCN you're interested in.
c. Click Route Tables.
d. Click the route table you're interested in.
e. Click Edit Route Rules.
f. Click + Another Route Rule and enter the following:
l Destination CIDR Block: The other VCN's CIDR block. If you want, you can
specify a subnet or particular subset of the peered VCN's CIDR.
l Target Type: Local Peering Gateway.
l Compartment: The compartment where the LPG is located.
Any subnet traffic with a destination that matches the rule is routed to your LPG. For more
information about setting up route rules, see Route Tables.
Later, if you no longer need the peering and want to delete your LPG, you must first delete all
the route rules in your VCN that specify the LPG as the target.
Prerequisite: Each administrator must have the CIDR block or specific subnets for the other
VCN. In general, you should use the same CIDR block you used in the route table rule in Task
E: Configure the route tables.
l Ingress rules for the types of traffic you want to allow from the other VCN, specifically
from the VCN's CIDR or specific subnets.
l Egress rule to allow outgoing traffic from your VCN to the other VCN. If the subnet
already has a broad egress rule for all types of protocols to all destinations (0.0.0.0/0),
then you don't need to add a special one for the other VCN.
l Ingress rule for ICMP type 3, code 4, specifically from the VCN's CIDR or specific
subnets. If you already have a broad rule for this traffic (as is recommended and
included in the default security list), you don't need to add a special one for the other
VCN.
1. Determine which subnets in your VCN need to communicate with the other VCN.
2. Update the security list for each of those subnets to include rules to allow the desired
egress or ingress traffic specifically with the CIDR block or subnet of the other VCN:
a. In the Console, while viewing the VCN you're interested in, click Security Lists.
b. Click the security list you're interested in.
c. Click Edit All Rules and create one or more rules, each for the specific type of
traffic you want to allow. See the example that follows for more details.
d. Click Save Security List Rules at the bottom of the dialog box.
Example: Let's say you want to add a stateful rule that enables ingress HTTPS (port 443)
traffic from the other VCN's CIDR. Here are the basic steps you take when adding a rule:
For more information about setting up security list rules, see Security Lists.
1. In the Console, click Networking, and then click Virtual Cloud Networks.
2. Click the VCN you're interested in.
3. Click Local Peering Gateways.
4. For the LPG you want to delete, click the Actions icon ( ), and then click Terminate.
5. Confirm when prompted.
l ListLocalPeeringGateways
l CreateLocalPeeringGateway
l GetLocalPeeringGateway
l UpdateLocalPeeringGateway
l DeleteLocalPeeringGateway
l ConnectLocalPeeringGateways
You use a DRG when connecting your existing on-premises network to your cloud network
with one (or both) of these:
l IPSec VPN
l Oracle Cloud Infrastructure FastConnect
For the purposes of access control, when creating a DRG, you must specify the compartment
where you want the DRG to reside. If you're not sure which compartment to use, put the DRG
in the same compartment as the cloud network. For more information, see Access Control.
You may optionally assign a friendly name to the DRG. It doesn't have to be unique, and you
can change it later. Oracle automatically assigns the DRG a unique identifier called an Oracle
Cloud ID (OCID). For more information, see Resource Identifiers.
A DRG is a standalone object. To use it, you must attach it to a cloud network. A cloud network
can be attached to only one DRG at a time, and vice versa. You can detach a DRG and reattach
it at any time. In the API, the process of attaching creates a DrgAttachment object with its
own OCID. To detach the DRG, you delete that attachment object.
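The attachment lifecycle described above can be modeled as a toy in-memory sketch (plain Python, no real SDK calls): attaching creates an attachment object, detaching deletes it, and a DRG and a VCN can each participate in at most one attachment at a time.

```python
class AttachmentError(Exception):
    pass

attachments = []  # each item models a DrgAttachment: {"id", "drg", "vcn"}

def attach(drg, vcn):
    # Enforce the one-to-one rule: a VCN attaches to at most one DRG,
    # and a DRG to at most one VCN, at any given time.
    for a in attachments:
        if a["drg"] == drg or a["vcn"] == vcn:
            raise AttachmentError("DRG or VCN already attached")
    a = {"id": f"att-{drg}-{vcn}", "drg": drg, "vcn": vcn}
    attachments.append(a)
    return a

def detach(attachment_id):
    # As in the API, detaching means deleting the attachment object.
    attachments[:] = [a for a in attachments if a["id"] != attachment_id]

a1 = attach("drg-1", "vcn-1")
try:
    attach("drg-1", "vcn-2")       # same DRG can't attach twice
except AttachmentError:
    pass
detach(a1["id"])                   # detach first, then reattach elsewhere
a2 = attach("drg-1", "vcn-2")      # allowed after the detach
```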
After attaching a DRG, you must update the routing in your cloud network to use the DRG.
Otherwise, traffic from your cloud network will not flow to the DRG. For more information,
see Route Tables.
To delete a DRG, it must not be attached to a cloud network or be connected to your on-
premises network via an IPSec VPN or Oracle Cloud Infrastructure FastConnect. Also, there
must not be a route table that lists that DRG as a target.
For information about the number of DRGs you can have, see Service Limits.
The resource is created and then displayed on the Dynamic Routing Gateways page of the
compartment you chose. It will be in the "Provisioning" state for a short period. You can
connect it to other parts of your network only after provisioning is complete.
The following instructions have you navigate to the DRG and then choose which cloud network
to attach. You could instead navigate to the cloud network and then choose which DRG to
attach.
1. Open the Console, click Networking, and then click Dynamic Routing Gateways.
A list of the DRGs in the compartment you're viewing is displayed. If you don’t see the
one you're looking for, make sure you’re viewing the correct compartment (select from
the list on the left side of the page).
The attachment will be in the "Attaching" state for a short period before it's ready.
After it's ready, make sure to create a route rule that directs traffic to this DRG. See To add a
route rule for a dynamic routing gateway.
If all non-intra-VCN traffic that's not covered by another rule in the table must be routed to
the DRG, then this is the new rule to add:
l Destination CIDR Block: 0.0.0.0/0. If you want to limit the rule to a specific
network (for example, your on-premises network), then use that network's CIDR
instead of 0.0.0.0/0.
l Target Type: Dynamic Routing Gateway.
l Target Compartment: The compartment where the DRG resides.
l Target: The DRG.
1. Open the Console, click Networking, and then click Dynamic Routing Gateways.
A list of the DRGs in the compartment you're viewing is displayed. If you don’t see the
one you're looking for, make sure you’re viewing the correct compartment (select from
the list on the left side of the page).
2. Click the DRG you want to detach.
3. On the left side of the page, click Virtual Cloud Networks to see the cloud network
the DRG is attached to. If the cloud network is in a different compartment than the one
you're working in, choose that compartment from the list on the left side of the page.
4. Click the Actions icon ( ), and then click Detach.
1. Open the Console, click Networking, and then click Dynamic Routing Gateways.
A list of the DRGs in the compartment you're viewing is displayed. If you don’t see the
one you're looking for, make sure you’re viewing the correct compartment (select from
the list on the left side of the page).
2. Click Delete for the DRG you want to delete.
3. Confirm when prompted.
The DRG will be in the "Terminating" state for a short period while it's being deleted.
l ListDrgs
l CreateDrg
l GetDrg
l UpdateDrg
l DeleteDrg
l ListDrgAttachments
l CreateDrgAttachment: This attaches a DRG to a VCN and results in a DrgAttachment
object with its own OCID.
l GetDrgAttachment
l UpdateDrgAttachment: You can update only the name of the DrgAttachment.
l DeleteDrgAttachment: This detaches a DRG from a VCN by deleting the DrgAttachment.
IPSec VPNs
This topic describes how to set up and manage an IPSec VPN connection between your on-
premises network and virtual cloud network (VCN).
For scenarios that include an IPSec VPN, see Scenario B: Private Subnets with a VPN and
Scenario C: Public and Private Subnets with a VPN.
Overview
One of the choices for connectivity between your on-premises network and your VCN is an
IPSec VPN connection. It consists of multiple redundant IPSec tunnels that use static routes to
route traffic. Border Gateway Protocol (BGP) is not supported for the IPSec VPN.
When setting up an IPSec VPN for your VCN, there are several Networking components that
you must create. See the following diagram and description of the components. You can
create the components with either the Console or the API.
CPE Object
At your end of the IPSec VPN is the actual router in your on-premises network (whether
hardware or software). The term customer-premises equipment (CPE) is commonly used in
some industries to refer to this type of on-premises equipment. When setting up the VPN, you
must create a virtual representation of the router. Oracle calls it a CPE, but this
documentation typically uses the term CPE object to help distinguish the virtual representation
you create from the actual on-premises router. The CPE object contains basic information
about your router that Oracle needs.
Dynamic Routing Gateway (DRG)
At Oracle's end of the IPSec VPN is a virtual router called a dynamic routing gateway, which is
the gateway into your VCN from your on-premises network. Whether you're using an IPSec
VPN or Oracle Cloud Infrastructure FastConnect virtual circuits to connect your on-premises
network and VCN, the traffic goes through the DRG. For more information, see Dynamic
Routing Gateways (DRGs).
A network administrator might think of the DRG as the VPN headend. After creating a DRG,
you must attach it to your VCN, using either the Console or API. You must also add one or
more route rules that route the desired traffic from the VCN to the DRG. Without that DRG
attachment and the route rule(s), traffic will not flow between your VCN and on-premises
network. At any time, you can detach the DRG from your VCN but maintain all the remaining
VPN components. You could then reattach the DRG again, or attach it to another VCN.
IPSec Connection
After creating the CPE object and DRG, you connect them by creating an IPSec connection,
which results in multiple redundant IPSec tunnels. Oracle recommends that you configure
your on-premises router to support all the tunnels in case one fails or Oracle takes one down
for maintenance. Each tunnel has configuration information that your network administrator
needs when configuring your on-premises router (an IP address and secret key).
Important: After you set up the IPSec VPN, you can't edit or
expand the list of static routes associated with the tunnels.
For a proof of concept (POC): If you're just doing a simple POC with a single on-premises
router, then having only a single static route of 0.0.0.0/0 is sufficient. See Example: Setting
Up a Proof of Concept IPSec VPN.
For a production network: Because you can't edit or expand the list of static routes
associated with the tunnels, it's wise to include a 0.0.0.0/0 static route in the list when you
create your IPSec connection. That way you can later change or expand your on-premises
network without touching your existing IPSec VPN. Instead, you only need to update the VCN's
route rules, which you can do at any time. The 0.0.0.0/0 static route can be in lieu of or in
addition to a static route for your overall on-premises network's CIDR (or a static route for
each subnet that needs to communicate with your VCN). See Example Layout with Multiple
Geographic Areas.
For port address translation (PAT): If you're doing PAT between your on-premises router
and VCN, the static route for the IPSec connection is the PAT IP address. See Example Layout
with PAT.
l Oracle advertises a route for each of your VCN’s subnets over the FastConnect virtual
circuit BGP session.
l Oracle overrides the default route selection behavior to prefer BGP routes over static
routes if a static route overlaps with a route advertised by your on-premises network.
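A hedged model of the stated preference: among overlapping candidate routes, the BGP-learned route wins over the static one. The route entries, CIDRs, and target names below are illustrative, not values from the service:

```python
import ipaddress

# Two overlapping routes to the same on-premises CIDR: one static (IPSec)
# and one learned via BGP (FastConnect virtual circuit).
routes = [
    {"cidr": "172.16.0.0/12", "origin": "static", "target": "ipsec-tunnel"},
    {"cidr": "172.16.0.0/12", "origin": "bgp",    "target": "fastconnect-vc"},
]

def select(dest_ip):
    """Pick a target, preferring BGP over static among overlapping routes."""
    dest = ipaddress.ip_address(dest_ip)
    candidates = [r for r in routes
                  if dest in ipaddress.ip_network(r["cidr"])]
    candidates.sort(key=lambda r: 0 if r["origin"] == "bgp" else 1)
    return candidates[0]["target"] if candidates else None

print(select("172.16.5.5"))  # fastconnect-vc (BGP route preferred)
```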
Gather answers to the following questions:
l What is the public IP address of your on-premises router? Will you have multiple
routers for redundancy (if so, get the IP address for each)?
l Will you be doing port address translation (PAT) between each on-premises
router and your VCN?
l What are the static routes for the IPSec connection? See About the Static Routes.
Also draw your own diagram of your network layout similar to the one in the preceding
section. Think about which parts of your on-premises network need to communicate with your
VCN, and the reverse. Map out the routing and security lists that you need.
Overall Process
Here's the overall process for setting up an IPSec VPN:
f. From your DRG, create an IPSec connection to the CPE object and provide your
static route(s).
3. Have your network administrator configure your CPE router: Your network
administrator must configure your on-premises router with information Oracle provided
during the previous steps. There is general information about the VCN, and specific
information for each IPSec tunnel. This is the only part of the setup that you can't
execute using the Console or API. Without the configuration, traffic will not flow
between your VCN and on-premises network. For more information, see Configuring
Your On-Premises Router for an IPSec VPN.
l What is the public IP address of your on-premises router? Will you have
multiple routers for redundancy (if so, get the IP address for each)?
Answer: 142.34.145.37
l Will you be doing port address translation (PAT) between each on-premises
router and your VCN?
Answer: No
l What are the static routes for the IPSec connection? See About the Static Routes.
Answer: Only 0.0.0.0/0 for a simple POC.
The VCN is created and displayed on the page. Make sure it is done being provisioned before
continuing.
The DRG will be in the "Provisioning" state for a short period. Make sure it is done being
provisioned before continuing.
The attachment will be in the "Attaching" state for a short period before it's ready.
Step 2d: Update the routing in your VCN to use the DRG
1. Click Networking, click Virtual Cloud Networks, and then click your VCN.
2. Click Route Tables to see a list of the route tables. For each subnet that needs to
communicate with your on-premises network, update that subnet's route table with a
new route for the DRG:
a. For a given route table (the default route table in this example), click Edit Route
Rules.
b. Click + Another Route Rule.
c. Enter the following:
l Destination CIDR: The CIDR for your on-premises network (10.0.0.0/16
in this example).
l Target Type: Dynamic Routing Gateway.
l Compartment: Leave as is.
l Target: The DRG you created earlier.
d. Click Save.
The route table now directs traffic destined for your on-premises network to the DRG.
You could also use this DRG as the gateway for Oracle Cloud
Infrastructure FastConnect, which is an alternative way to
connect your on-premises network to your VCN.
Step 2e: Create a CPE object and provide your router's public IP address
The CPE object will be in the "Provisioning" state for a short period.
Step 2f: From your DRG, create an IPSec connection to the CPE object and
provide your static routes
7. Copy the information for each of the tunnels into an email or other location so you can
deliver it to the network administrator who will configure the on-premises router.
For more information, see Configuring Your On-Premises Router for an IPSec VPN. You
can view this tunnel information here in the Console at any time.
8. Click Close.
You have now created all the components required for the IPSec VPN. But your network
administrator must configure the on-premises router before network traffic can flow between
your on-premises network and VCN.
l Two networks in separate geographical areas that each connect to your VCN
l A single on-premises router in each area
l Two IPSec VPNs (one for each on-premises router)
Notice that each IPSec VPN has two static routes associated with it: one for the CIDR of the
particular geographical area, and a broad 0.0.0.0/0 static route. To understand why, see
About the Static Routes.
Here are a few examples of where the 0.0.0.0/0 static route can provide flexibility:
l Let's say that the CPE 1 router goes down. Assuming Subnet 1 and Subnet 2 can
communicate with each other, your VCN could still reach the systems in Subnet 1
because of the 0.0.0.0/0 static route that goes to CPE 2.
l Or, let's say that your organization adds a new geographical area with Subnet 3 and
initially just connects it to Subnet 2. If you added a route rule to your VCN's route table
for Subnet 3, the VCN could reach systems in Subnet 3 because of the 0.0.0.0/0 static
route that goes to CPE 2.
l Two networks in separate geographical areas that each connect to your VCN
l Redundant on-premises routers (two in each geographical area)
l Four IPSec VPNs (one for each on-premises router)
l PAT for each on-premises router
When you create each of the four IPSec connections, the static route you specify is the PAT IP
address for the specific on-premises router. When you set up the route rules for the VCN, you
specify a rule for each PAT IP address (or an aggregate CIDR that covers them all) with your
DRG as the rule's target.
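Whether one aggregate rule can replace the per-address rules is easy to check with Python's stdlib ipaddress module. The PAT addresses below are illustrative placeholders; the real ones come from your network team:

```python
import ipaddress

# Illustrative PAT IP addresses for four on-premises routers
# (one per IPSec connection).
pat_ips = ["203.0.113.4", "203.0.113.5", "203.0.113.6", "203.0.113.7"]

# Each PAT IP is a /32 static route on its IPSec connection.
host_routes = [ipaddress.ip_network(ip + "/32") for ip in pat_ips]

# For the VCN's route table, you can add one rule per /32, or a single
# rule for an aggregate CIDR that covers them all:
aggregate = list(ipaddress.collapse_addresses(host_routes))
print(aggregate)  # [IPv4Network('203.0.113.4/30')]
```

Here the four addresses happen to form an aligned block, so they collapse to a single /30; misaligned addresses would collapse to more than one network, in which case per-address rules may be simpler.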
Access Control
For the purposes of access control, when you set up the IPSec VPN, you must specify the
compartment where you want each of the components to reside. If you're not sure which
compartment to use, put all the components in the same compartment as the VCN.
The default administrator for your Oracle Cloud Infrastructure tenancy or any other users in
the Administrators group have the access required to set up the IPSec VPN.
For information about compartments and restricting access to your networking components,
see Access Control.
You may optionally assign a friendly name to each of the components when you create them.
These names don't have to be unique, although it's a best practice to use unique names across
your tenancy. Oracle automatically assigns each component a unique identifier called an
Oracle Cloud ID (OCID). For more information, see Resource Identifiers.
When you successfully create the IPSec connection, Oracle produces important configuration
information for each of the resulting IPSec tunnels. If you're using the Console, see To get the
status and configuration information for the IPSec tunnels. If you're using the API, call
GetIPSecConnectionDeviceConfig. You need to give that information to a network
administrator to configure the on-premises router. The information consists of the IP address
of the DRG (the VPN headend) and the shared secret (also called the pre-shared key). Traffic
cannot flow across the IPSec connection without that configuration. For more information, see
Configuring Your On-Premises Router for an IPSec VPN.
To get the status and configuration information for the IPSec tunnels
1. Open the Console, click Networking, and then click Dynamic Routing Gateways.
A list of the DRGs in the compartment you're viewing is displayed. If you don’t see the
one you're looking for, make sure you’re viewing the correct compartment (select from
the list on the left side of the page).
2. Click the DRG that the IPSec tunnels are connected to.
The gateway's details are displayed, including the IPSec connection itself.
3. Click the Actions icon, and then click Tunnel Information.
The configuration information for each tunnel is displayed (the IP address of the VPN headend
and the shared secret). Also, the tunnel's status is displayed (either "Available" or "Down").
See the example in Step 2f: From your DRG, create an IPSec connection to the CPE object and provide your static routes.
If you want to disable the IPSec VPN between your on-premises network and VCN, simply
detach the DRG from the VCN instead of deleting the IPSec connection. If you were to delete
the IPSec connection and then later want to re-establish it, your network administrator would
have to configure your on-premises router again with a new set of configuration information
from Oracle. Keep in mind that you have a single DRG connected to your VCN, and any other
connections to the DRG will also be disrupted if you detach the DRG.
If you want to tear down the entire IPSec VPN, you must first terminate the IPSec connection.
Then you can delete the CPE object. If you're not using the DRG for another connection to your
on-premises network, you can detach it from the VCN and then delete it.
The CPE object will be in the "Terminating" state for a short period while it's being deleted.
The IPSec connection will be in the "Terminating" state for a short period while it's being
deleted.
l ListVcns
l CreateVcn
l GetVcn
l UpdateVcn
l DeleteVcn
l ListDrgs
l CreateDrg
l GetDrg
l UpdateDrg: You can update only the DRG's name.
l DeleteDrg
l ListDrgAttachments
l CreateDrgAttachment: This attaches a DRG to a VCN and results in a DrgAttachment
object with its own OCID.
l GetDrgAttachment
l UpdateDrgAttachment: You can update only the name of the DrgAttachment.
l DeleteDrgAttachment: This detaches a DRG from a VCN by deleting the DrgAttachment.
l ListCpes
l CreateCpe
l GetCpe
l UpdateCpe: You can update only the CPE object's name.
l DeleteCpe
l ListIPSecConnections
l CreateIPSecConnection: Returns the configuration information for each tunnel, including
the IP address of the DRG (the VPN headend) and the shared secret. See Configuring
Your On-Premises Router for an IPSec VPN.
l GetIPSecConnection
l UpdateIPSecConnection
l DeleteIPSecConnection
l GetIPSecConnectionDeviceStatus: Use this to determine whether the IPSec tunnels are
up or down.
l GetIPSecConnectionDeviceConfig: Use this to get the configuration information for each
tunnel.
The following figure shows the basic layout of the IPSec VPN connection.
Asymmetric Routing
Oracle uses asymmetric routing across the multiple tunnels that make up the IPSec VPN
connection. Even if you configure one tunnel as primary and another as backup, traffic from
your VCN to your on-premises network can use any tunnel that is "up" on your device. Make
sure to configure your firewalls accordingly. Otherwise, ping tests or application traffic across
the connection will not reliably work.
You or someone in your organization must have already used Networking to create a VCN and
an IPSec connection, which consists of multiple IPSec tunnels for redundancy. You must
gather the following information about those components:
l VCN ID: The VCN ID has a UUID at the end. You can use this UUID, or any other string
that helps you identify this VCN in the device configuration and doesn't conflict with
other object-group or access-list names.
l VCN CIDR
l VCN CIDR subnet mask
l For each IPSec tunnel:
o The IP address of the Oracle IPSec tunnel endpoint (the VPN headend)
o The pre-shared key (PSK)
You also need some basic information about the inside and outside interfaces of your on-
premises router. For more information, see the configuration topic for your type of router.
You can get the status of the IPSec tunnels in the API or Console. For instructions, see To get
the status and configuration information for the IPSec tunnels.
Device Configurations
l Generic CPE Configuration Information
l Check Point
l Cisco ASA: Policy-Based Configuration
l Cisco ASA: Route-Based Configuration
l Cisco IOS
l Fortigate
l Juniper MX
l Juniper SRX
l Palo Alto
The Oracle Cloud Infrastructure VPN headends use next-hop-based tunnels. When you create
a new IPSec connection, you specify a list of IPv4 networks that should be routed from your
dynamic routing gateway (DRG) through the IPSec tunnel to your CPE.
l local=0.0.0.0/0
l remote=0.0.0.0/0
l service=any
Check Point
This section includes two different sets of instructions: for domain-based tunnel configuration
and VPN tunnel interface (VTI) configuration.
Get the following parameters from the Oracle Cloud Infrastructure Console or API.
${ipAddress#}
l Oracle VPN headend IPSec tunnel endpoints. There is one value for each tunnel.
l Example value: 129.146.12.52
${sharedSecret#}
l The IPSec IKE pre-shared-key. There is one value for each tunnel.
l Example value: ZNmzNKEDPfAMkD7nTH3SWr6OFabdT6exXn6enSlsKbE
${cpePublicInterface}
l The name of the Interface where the CPE's public IP address is configured.
l Example Value: eth1
${VcnCidrBlock}
l When creating the VCN, your company selected this CIDR to represent the IP aggregate
network for all VCN hosts.
l Example Value: 10.0.0.0/16
l These are the base address and netmask for the ${VcnCidrBlock}
l For more information, see: Wikipedia reference for finding CidrNetmask
l Values based on the example ${VcnCidrBlock} shown above:
o ${VcnCidrNetwork}: 10.0.0.0
o ${VcnCidrNetmask}: 255.255.0.0
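Rather than working out the network and netmask by hand, you can derive both values from the CIDR block programmatically. A quick sketch with Python's standard ipaddress module, using the example ${VcnCidrBlock} above:

```python
import ipaddress

vcn_cidr_block = "10.0.0.0/16"  # example ${VcnCidrBlock}
net = ipaddress.ip_network(vcn_cidr_block)

print(net.network_address)  # ${VcnCidrNetwork}: 10.0.0.0
print(net.netmask)          # ${VcnCidrNetmask}: 255.255.0.0
```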
Each region has multiple Oracle IPSec headends. The template below will allow setting up
three tunnels on your CPE, one to each headend. In the table below, "User" is you/your
company.
l Phase 1 (IKE) encryption: AES-256-cbc
l Phase 2 (IPSec) encryption: AES-256-cbc
l local=0.0.0.0/0
l remote=0.0.0.0/0
l service=any
l In this device's Topology tab, under the VPN domain, select "manually defined", and then select the group that you created in the previous steps that represents the remote side (Oracle/VCN side) domain.
l Open your local device and go to the topology tab. Under the VPN domain, select
"manually defined", and then select the group that you created in the previous steps
that represents your internal domain.
l Open IPsec VPN tab "Communities" and create new "Star Community".
o Add your gateway or cluster as the Center Gateway.
o Add new Interoperable Devices as Satellite Gateways.
l Open the "Encryption" tab > Encryption Method, select "IKEv1" and under Encryption
Suite select "Custom". Select the following parameters for custom encryption suite:
o ISAKMP Phase 1: IKE SA
n AES-256
n SHA-384
o ISAKMP Phase 2: IPSec SA
n AES-256
n SHA1
l Under "Tunnel Management", under "VPN Tunnel Sharing", select "One VPN tunnel per
Gateway pair".
l Create Firewall Rules: Open Global Properties > VPN > Advanced.
o For any firewall rule that matches VPN traffic, add match rules for each direction.
l Under "Network Interfaces" tab, create one new "VPN Tunnel" interface per IPSec
connection tunnel.
o VPN Tunnel ID: Select a unique number
o Peer: Unique name for tunnel
o VPN Tunnel Type: Unnumbered
o Physical Device: ${cpePublicInterface}
l Under the "IPv4 Static Routes" tab, define VPN static routes, one per tunnel.
o Destination: ${VcnCidrNetwork}
o Subnet Mask: ${VcnCidrNetmask}
o Gateway: Interface - one per VPN tunnel
o Name: Unique name per tunnel
o IPv4 Address: ${ipAddress#}
l Open your gateway/cluster and navigate to the "Topology" tab, choose "Manually
defined", and select new Simple Group.
l Open IPsec VPN tab "Communities" and create new "Star Community".
o Add your gateway or cluster as the Center Gateway.
o Add new Interoperable Devices as Satellite Gateways.
l Open the "Encryption" tab > Encryption Method, select "IKEv1" and under Encryption
Suite select "Custom". Select the following parameters for custom encryption suite:
o ISAKMP Phase 1: IKE SA
n AES-256
n SHA-384
o ISAKMP Phase 2: IPSec SA
n AES-256
n SHA1
l Under "Tunnel Management", under "VPN Tunnel Sharing", select "One VPN tunnel per
Gateway pair".
l Create Firewall Rules: Open Global Properties > VPN > Advanced.
o For any firewall rule that matches VPN traffic, add match rules for each direction.
The following policy-based configuration was validated using a Cisco ASA 5505.
Get the following parameters from the Oracle Cloud Infrastructure Console or API.
${ipAddress#}
l Oracle VPN headend IPSec tunnel endpoints. There is one value for each tunnel.
l Example value: 129.146.12.52
${sharedSecret#}
l The IPSec IKE pre-shared-key. There is one value for each tunnel.
l Example value: ZNmzNKEDPfAMkD7nTH3SWr6OFabdT6exXn6enSlsKbE
${cpePublicIpAddress}
l The public IP address for the CPE (previously made available to Oracle via the Console).
This is the IP address of your outside interface.
l Example Value: 1.2.3.4
${vcnID}
l A UUID string - used to uniquely name some access-lists and object-groups (can also
use any other string that does not create a name that conflicts with an existing object-
group or access-list).
l Example: oracle-vcn-1
${VcnCidrBlock}
l When creating the VCN, your company selected this CIDR to represent the IP aggregate
network for all VCN hosts.
l Example Value: 10.0.0.0/20
l These are the base address and netmask for the ${VcnCidrBlock}
l For more information, see: Wikipedia reference for finding CidrNetmask
l Values based on the example ${VcnCidrBlock} shown above:
o ${VcnCidrNetwork}: 10.0.0.0
o ${VcnCidrNetmask}: 255.255.240.0
l In order to disable NAT between your VCN and your on-premises network, you need to
define the source IP addresses for packets going through your CPE into the IPSec
tunnels.
l Example Value: [ (10.0.0.0, 255.0.0.0), (172.16.0.0, 255.240.0.0), (192.168.0.0,
255.255.0.0) ]
l These are the interfaces that face the inside and outside of your CPE.
l The outside interface should be able to ping the Oracle VPN headend IPs.
l The inside interface is the one facing your customer premise infrastructure.
l Values based on Sample Router Config above:
o ${insideInterface}: Vlan100
o ${outsideInterface}: Vlan101
l These are the "nameif" values for the inside and outside interfaces.
l Values based on Sample Router Config above:
o ${insideInterfaceName}: inside
o ${outsideInterfaceName}: outside
l You likely also have access-lists configured to control traffic in and out of your inside
and outside interfaces.
l Values based on Sample Router Config above:
o ${inboundAclName}: inbound-acl
o ${outboundAclName}: outbound-acl
Each region has multiple Oracle IPSec headends. The template below will allow setting up
three tunnels on your CPE, one to each headend. In the table below, "User" is you/your
company.
l ${vcnID}: Console/API/User (one value)
l Phase 1 (IKE) encryption: AES-256-cbc
l Phase 2 (IPSec) encryption: AES-256-cbc
CPE Configuration
The ASA uses configuration objects to identify IP networks that are used in multiple locations.
This object group is used by the IPSec policies to determine which IP addresses belong to your
Oracle VCN, so that traffic to them can be encrypted and sent inside the correct IPSec tunnel.
object network oracle-vcn-${vcnID}
subnet ${VcnCidrNetwork} ${VcnCidrNetmask}
This object may already be present on your ASA. A common name would match the interface
name of your "inside" interface. You might have multiple "subnet" entries in this object-group:
one for each aggregate subnet that you want to allow to use this IPSec tunnel for traffic to
and from your Oracle VCN.
object network ${insideInterfaceName}
subnet ${custCIDR} ${custMask}
If you are using NAT for traffic between your inside and outside interfaces, you might need to
disable NAT for traffic between your on-premises network and the Oracle VCN.
nat (${insideInterfaceName},${outsideInterfaceName}) source static ${insideInterfaceName}
${insideInterfaceName} destination static oracle-vcn-${vcnID} oracle-vcn-${vcnID}
Assuming an access-list controls traffic in and out of your Internet-facing interface, you need
to permit traffic between your CPE and the Oracle VPN headends.
WARNING: The new ACL entries you add to permit the traffic between your CPE and the VPN
headends need to be above any deny statements you might have in your existing access-list:
access-list ${outboundAclName} extended permit ip host ${ipAddress1} host ${cpePublicIpAddress}
access-list ${outboundAclName} extended permit ip host ${ipAddress2} host ${cpePublicIpAddress}
access-list ${outboundAclName} extended permit ip host ${ipAddress3} host ${cpePublicIpAddress}
NOTE: You can be more restrictive in your ACL if you wish by only permitting the following:
The following access list "orcl-acl" will be associated with the IPSec security association using
the "crypto-map" command.
access-list ${cryptoMapAclName} extended permit ip any ${VcnCidrNetwork} ${VcnCidrNetmask}
The following ACL will be used in the VPN filter to restrict the actual traffic that will be
permitted through the tunnels. See Base VPN Policy Configuration for details.
access-list ${vpnFilterAclName} extended permit ip ${VcnCidrNetwork} ${VcnCidrNetmask} ${custCIDR}
${custMask}
authentication pre-share
encryption aes-256
hash sha
group 5
lifetime 28800
This configuration sets the base values for the IPSec tunnels.
It also restricts what traffic is allowed over the tunnels via the "vpn-filter" command. By
default, all traffic is denied by the filter.
group-policy oracle-vcn-vpn-policy internal
group-policy oracle-vcn-vpn-policy attributes
vpn-idle-timeout none
vpn-session-timeout none
vpn-tunnel-protocol ikev1
vpn-filter value ${vpnFilterAclName}
l group-policy
l VPN filter
IPSEC CONFIGURATION
WARNING: Make sure your crypto map sequence numbers do not overlap with existing
crypto maps.
crypto ipsec ikev1 transform-set oracle-vcn-transform esp-aes-256 esp-sha-hmac
crypto map oracle-${ipAddress1}-map-v1 1 match address ${cryptoMapAclName}
crypto map oracle-${ipAddress1}-map-v1 1 set pfs group5
crypto map oracle-${ipAddress1}-map-v1 1 set connection-type originate-only
crypto map oracle-${ipAddress1}-map-v1 1 set peer ${ipAddress1}
crypto map oracle-${ipAddress1}-map-v1 1 set ikev1 transform-set oracle-vcn-transform
crypto map oracle-${ipAddress1}-map-v1 1 set security-association lifetime seconds 300
crypto map oracle-${ipAddress1}-map-v1 interface outside
This configuration matches the group policy with an Oracle VPN headend endpoint.
tunnel-group ${ipAddress1} type ipsec-l2l
tunnel-group ${ipAddress1} general-attributes
default-group-policy oracle-vcn-vpn-policy
tunnel-group ${ipAddress1} ipsec-attributes
ikev1 pre-shared-key ${sharedSecret1}
The Cisco ASA device doesn't establish a tunnel if there's no interesting traffic trying to pass
through the tunnel. You must configure SLA monitoring on your device so that the tunnel
remains up at all times. You must allow ICMP on the outside interface. Make sure that the SLA
monitor number is unique.
sla monitor 1
type echo protocol ipIcmpEcho ${VcnHostIp} interface outside
frequency 5
sla monitor schedule 1 life forever start-time now
icmp permit any ${outsideInterfaceName}
OPTIONAL CONFIG WHERE VPN TRAFFIC MIGHT ENTER ONE TUNNEL AND EXIT ANOTHER
If the VPN traffic enters an interface with the same security level as the interface towards the
packet's next hop, you need to allow that traffic. By default, packets between interfaces with
identical security levels on your ASA are dropped.
Because the maximum transmission unit (maximum packet size) through the IPSec tunnel is
less than 1500 bytes, you need to either fragment packets that are too big to fit through the
tunnel, or signal back to the hosts communicating through the tunnel that they need to send
smaller packets.
You can configure the Cisco ASA to change the maximum segment size (MSS) for any new TCP
flows through the tunnel. The ASA will look at any TCP packets where the SYN flag is set and
change the MSS value to the configured value. This might help new TCP flows avoid having to
use path maximum transmission unit discovery (pmtud).
sysopt connection tcpmss 1387
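The clamped MSS relates to the usable tunnel path MTU by simple arithmetic: MSS = path MTU minus the 20-byte IPv4 header and the 20-byte TCP header (assuming no TCP options). The 1427-byte figure in the sketch below is only what the configured 1387-byte MSS implies, not a documented Oracle value:

```python
IPV4_HEADER = 20  # bytes, no IP options
TCP_HEADER = 20   # bytes, no TCP options

def mss_for_path_mtu(mtu):
    """Largest TCP payload that fits in one packet of the given path MTU."""
    return mtu - IPV4_HEADER - TCP_HEADER

# The ASA config above clamps the MSS to 1387 bytes, which corresponds
# to an assumed usable path MTU of 1427 bytes through the IPSec tunnel.
print(mss_for_path_mtu(1427))  # 1387
print(mss_for_path_mtu(1500))  # 1460, the usual Ethernet default
```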
References:
l RFC 1191
l Relevant Cisco reference documentation
Path MTU Discovery requires that all TCP packets have the Don't Fragment (DF) bit set. When
the packet arrives on the ASA, if it is too big to go through the tunnel and the DF bit is set, the
ASA will drop the packet and send an ICMP packet back to the sender indicating that the
packet was too big to fit through the tunnel. There are three options on the ASA for how to
handle the DF bit (pick one of the options):
l Set the DF bit: If the DF bit is not already set and the packet is too big, the ASA sets
the DF bit, drops the packet, and sends an ICMP error message back to the sender.
(Recommended)
crypto ipsec df-bit set-df ${outsideInterfaceName}
l Clear the DF bit: The ASA clears the DF bit, fragments packets that are too big, and
sends the fragments to the end host in Oracle Cloud Infrastructure to reassemble.
crypto ipsec df-bit clear-df ${outsideInterfaceName}
l Ignore (copy) the DF bit: If the packet is too big and the DF bit is set, the ASA drops the
packet and sends an error message to the sender; if the DF bit is not set, the ASA
fragments the packet and sends it to Oracle Cloud Infrastructure.
crypto ipsec df-bit copy-df ${outsideInterfaceName}
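The three df-bit options can be summarized as a small decision function. This is an illustrative model of the behavior described above for an oversized packet, not actual ASA code:

```python
def handle_oversized_packet(df_bit_set, mode):
    """Model of how the ASA treats a packet too big for the tunnel,
    depending on the configured crypto ipsec df-bit mode."""
    if mode == "set-df":
        # DF is forced on, so an oversized packet is always rejected.
        return "drop + ICMP error to sender"
    if mode == "clear-df":
        # DF is cleared, so the ASA may always fragment.
        return "fragment and forward"
    if mode == "copy-df":
        # Honor the sender's DF bit as-is.
        return "drop + ICMP error to sender" if df_bit_set else "fragment and forward"
    raise ValueError("unknown df-bit mode")

print(handle_oversized_packet(True, "copy-df"))   # sender set DF: rejected
print(handle_oversized_packet(False, "copy-df"))  # sender did not: fragmented
```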
Get the following parameters from the Oracle Cloud Infrastructure Console or API.
${ipAddress#}
l Oracle VPN headend IPSec tunnel endpoints. There is one value for each tunnel.
l Example value: 129.146.12.52
${sharedSecret#}
l The IPSec IKE pre-shared-key. There is one value for each tunnel.
l Example value: ZNmzNKEDPfAMkD7nTH3SWr6OFabdT6exXn6enSlsKbE
${cpePublicIpAddress}
l The public IP address for the CPE (previously made available to Oracle via the Console).
This is the IP address of your outside interface.
l Example Value: 1.2.3.4
${vcnID}
l A UUID string - used to uniquely name some access-lists and object-groups (can also
use any other string that does not create a name that conflicts with an existing object-
group or access-list).
l Example: oracle-vcn-1
${VcnCidrBlock}
l When creating the VCN, your company selected this CIDR to represent the IP aggregate
network for all VCN hosts.
l Example Value: 10.0.0.0/20
l These are the base address and netmask for the ${VcnCidrBlock}
l For more information, see: Wikipedia reference for finding CidrNetmask
l Values based on the example ${VcnCidrBlock} shown above:
o ${VcnCidrNetwork}: 10.0.0.0
o ${VcnCidrNetmask}: 255.255.240.0
l In order to disable NAT between your VCN and your on-premises network, you need to
define the source IP addresses for packets going through your CPE into the IPSec
tunnels.
l Example Value: [ (10.0.0.0, 255.0.0.0), (172.16.0.0, 255.240.0.0), (192.168.0.0,
255.255.0.0) ]
${outsideInterface}
${internalIpAddress}
l The IP address for the tunnel interface. You have one per tunnel you configure (for
example, ${internalIpAddress1}, ${internalIpAddress2}, and so on).
l The address can be any one that is not used in your network.
Each region has multiple Oracle IPSec headends. The template below will allow setting up
three tunnels on your CPE, one to each headend. In the table below, "User" is you/your
company.
l ${vcnID}: Console/API/User (one value)
l Phase 1 (IKE) encryption: AES-256-cbc
l Phase 2 (IPSec) encryption: AES-256-cbc
CPE Configuration
interface ${tunnelInterfaceName1}
nameif ORACLE-VPN1
ip address ${internalIpAddress1} 255.255.255.252
tunnel source interface ${outsideInterfaceName}
tunnel destination ${ipAddress1}
tunnel mode ipsec ipv4
tunnel protection ipsec profile oracle-vcn-vpn-policy
interface ${tunnelInterfaceName2}
nameif ORACLE-VPN2
ip address ${internalIpAddress2} 255.255.255.252
tunnel source interface ${outsideInterfaceName}
tunnel destination ${ipAddress2}
tunnel mode ipsec ipv4
tunnel protection ipsec profile oracle-vcn-vpn-policy
sla monitor 10
type echo protocol ipIcmpEcho ${ipAddress1} interface outside
frequency 5
sla monitor schedule 10 life forever start-time now
Cisco IOS
This configuration was validated using a Cisco 2921 running Cisco IOS Version 15.4(3)M3.
Get the following parameters from the Oracle Cloud Infrastructure Console or API.
${ipAddress#}
l Oracle VPN headend IPSec tunnel endpoints. There is one value for each tunnel.
l Example value: 129.146.12.52
${sharedSecret#}
l The IPSec IKE pre-shared-key. There is one value for each tunnel.
l Example value: ZNmzNKEDPfAMkD7nTH3SWr6OFabdT6exXn6enSlsKbE
${cpePublicIpAddress}
l The public IP address for the CPE (previously made available to Oracle via the Console).
${VcnCidrBlock}
l When creating the VCN, your company selected this CIDR to represent the IP aggregate
network for all VCN hosts.
l Example Value: 10.0.0.0/20
l Cisco IOS uses Tunnel interfaces as "Virtual Tunnel Interfaces" (VTI) for IPSec
tunnels.
l Command to find value: show run | inc interface Tunnel
l You will need one unused unit number per tunnel.
l Example Values: 1, 2, 3
Each region has multiple Oracle IPSec headends. The template below will allow setting up
three tunnels on your CPE, one to each headend. In the table below, "User" is you/your
company.
l ${tunnelNumber1}: provided by you (tunnel 1)
l ${tunnelNumber2}: provided by you (tunnel 2)
l ${tunnelNumber3}: provided by you (tunnel 3)
l Phase 1 (IKE) encryption: AES-256-cbc
l Phase 2 (IPSec) encryption: AES-256-cbc
l local=0.0.0.0/0
l remote=0.0.0.0/0
l service=any
CPE Configuration
interface Tunnel${tunnelNumber1}
ip tcp adjust-mss 1387
tunnel source ${cpePublicIpAddress}
tunnel mode ipsec ipv4
tunnel destination ${ipAddress1}
tunnel protection ipsec profile oracle_vpn_${ipAddress1}
!
interface Tunnel${tunnelNumber2}
tunnel source ${cpePublicIpAddress}
tunnel mode ipsec ipv4
tunnel destination ${ipAddress2}
tunnel protection ipsec profile oracle_vpn_${ipAddress2}
!
interface Tunnel${tunnelNumber3}
tunnel source ${cpePublicIpAddress}
tunnel mode ipsec ipv4
tunnel destination ${ipAddress3}
tunnel protection ipsec profile oracle_vpn_${ipAddress3}
UPDATE ANY INTERNET-FACING ACCESS LIST TO ALLOW IPSEC AND ISAKMP PACKETS
In order for the tunnels to come up, you need to open UDP port 500 (ISAKMP) and the ESP
protocol to the interface with your CPE public IP address. Example:
interface GigabitEthernet0/1
description INTERNET
ip address ${cpePublicIpAddress} 255.255.255.252
ip access-group INTERNET-INGRESS in
duplex auto
speed auto
!
The IPSec tunnels require static routes to direct traffic through them. The routes are
configured to point down the tunnel interfaces. If an IPSec tunnel is down, the Cisco router
stops using that route. You should redistribute these routes into your on-premises network.
ip route ${VcnCidrNetwork} ${VcnCidrNetmask} Tunnel${tunnelNumber1}
ip route ${VcnCidrNetwork} ${VcnCidrNetmask} Tunnel${tunnelNumber2}
ip route ${VcnCidrNetwork} ${VcnCidrNetmask} Tunnel${tunnelNumber3}
Fortigate
This configuration was validated using a Fortigate VM running v5.4.1,build1064 (GA).
Get the following parameters from the Oracle Cloud Infrastructure Console or API.
${ipAddress#}
l Oracle VPN headend IPSec tunnel endpoints. There is one value for each tunnel.
l Example value: 129.146.12.52
${sharedSecret#}
l The IPSec IKE pre-shared-key. There is one value for each tunnel.
l Example value: ZNmzNKEDPfAMkD7nTH3SWr6OFabdT6exXn6enSlsKbE
${cpePublicIpAddress}
l The public IP address for the CPE (previously made available to Oracle via the Console).
${VcnCidrBlock}
l When creating the VCN, your company selected this CIDR to represent the IP aggregate
network for all VCN hosts.
l Example Value: 10.0.0.0/20
${cpePublicInterface}
l The name of the Fortigate interface where the CPE IP address is configured.
l Example value: ge-0/0/1.0
${policyId#}
${vcnID}
l A UUID string - used to uniquely name some access-lists and object-groups (can also
use any other string that does not create a name that conflicts with an existing object-
group or access-list).
l Example: oracle-vcn-1
l These are the base address and netmask of your VCN Cidr Block
l For more information, see: Wikipedia reference for finding CidrNetmask
l Values based on the example ${VcnCidrBlock} shown above:
o ${VcnCidrNetwork}: 10.0.0.0
o ${VcnCidrNetmask}: 255.255.240.0
Each region has multiple Oracle IPSec headends. The template below will allow setting up
three tunnels on your CPE, one to each headend. In the table below, "User" is you/your
company.
l Phase 1 (IKE) encryption: AES-256-cbc
l Phase 2 (IPSec) encryption: AES-256-cbc
l local=0.0.0.0/0
l remote=0.0.0.0/0
l service=any
CPE Configuration
FIREWALL CONFIGURATION
The configuration example below would allow all traffic from your VCN to any host on your
network.
config firewall address
edit any_ipv4
next
edit OracleVcn-${VcnId}_remote_subnet
set subnet ${VcnCidrNetwork} ${VcnCidrNetmask}
next
end
next
edit ${ipAddress3}
set interface ${cpePublicInterface}
set keylife 28800
set proposal aes256-sha384 aes256-sha256
set comments "VPN: Oracle ${ipAddress3}"
set dhgrp 5
set remote-gw ${ipAddress3}
set psksecret ${sharedSecret3}
next
end
Juniper MX
This configuration was validated using a Juniper MX 240 running Junos 15.1.
Get the following parameters from the Oracle Cloud Infrastructure Console or API.
${ipAddress#}
l Oracle VPN headend IPSec tunnel endpoints. There is one value for each tunnel.
l Example value: 129.146.12.52
${sharedSecret#}
l The IPSec IKE pre-shared-key. There is one value for each tunnel.
l Example value: ZNmzNKEDPfAMkD7nTH3SWr6OFabdT6exXn6enSlsKbE
${cpePublicIpAddress}
l The public IP address for the CPE (previously made available to Oracle via the Console).
${VcnCidrBlock}
l When creating the VCN, your company selected this CIDR to represent the IP aggregate
network for all VCN hosts.
l Example Value: 10.0.0.0/20
${cpePublicInterface}
l The name of the Juniper interface where the CPE IP address is configured.
l Example value: ge-0/0/1.0
l Each of these interfaces corresponds to one of the four encryption ASICs on the MS-MPC card.
l You can distribute load across the ASICs by spreading your tunnels across them.
l Example values: ms-2/3/0, ms-2/3/1, ms-2/3/2
l For every tunnel, you need a pair of units on an MS-MPC (ms-) interface.
l One unit represents the outside of the IPSec tunnel.
l The other represents the inside of the tunnel.
l The router forwards packets from your on-premises network to your VCN into the inside
unit.
o The encryption ASIC then encrypts the packets based on the rules and policies.
o Then, the encrypted packet egresses out the outside unit as an ESP packet, ready
to be forwarded to Oracle's VPN headend routers.
l There are a little over 16,000 possible values for unit numbers.
o One way to allocate the units is to offset them by 8,000.
o You can pick values between 0 and 7999 for insideMsUnits, and between 8000 and
15999 for outsideMsUnits.
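The suggested allocation scheme is easy to express as a helper; the 8,000 offset is just the convention described above, not a Junos requirement:

```python
MS_UNIT_OFFSET = 8000  # inside units 0-7999, outside units 8000-15999

def ms_units(tunnel_index):
    """Return (inside_unit, outside_unit) for the Nth IPSec tunnel."""
    if not 0 <= tunnel_index < MS_UNIT_OFFSET:
        raise ValueError("tunnel index out of range")
    return tunnel_index, tunnel_index + MS_UNIT_OFFSET

# Three tunnels, one per Oracle VPN headend:
for i in range(3):
    print(ms_units(i))  # (0, 8000), (1, 8001), (2, 8002)
```

Keeping the inside/outside units paired this way makes it obvious at a glance which two units belong to the same tunnel.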
Each region has multiple Oracle IPSec headends. The template below will allow setting up
three tunnels on your CPE, one to each headend. In the table below, "User" is you/your
company.
l Phase 1 (IKE) encryption: AES-256-cbc
l Phase 2 (IPSec) encryption: AES-256-cbc
l local=0.0.0.0/0
l remote=0.0.0.0/0
l service=any
CPE Configuration
Note on routing-instances: If you are using routing-instances on your CPE, you need to
make sure you account for them in your configuration. The main configuration template below
does not account for routing-instances, so you would need to merge the following config
snippet into the non-routing-instance templates.
routing-instances {
${customer premise routing instance} {
interface ${msInterface1}.${insideMsUnit1};
interface ${msInterface2}.${insideMsUnit2};
interface ${msInterface3}.${insideMsUnit3};
routing-options {
static {
route ${vcnCidrBlock} next-hop [ ${msInterface1}.${insideMsUnit1}
${msInterface2}.${insideMsUnit2} ${msInterface3}.${insideMsUnit3} ]
}
}
}
${internet-routing-instance} {
interface ${msInterface1}.${insideMsUnit1};
interface ${msInterface2}.${insideMsUnit2};
interface ${msInterface3}.${insideMsUnit3};
}
}
services {
service-set oracle-vpn-tunnel-${ipAddress1} {
ipsec-vpn-options {
local-gateway ${cpePublicIpAddress} routing-instance ${internet-routing-instance};
}
}
service-set oracle-vpn-tunnel-${ipAddress2} {
ipsec-vpn-options {
local-gateway ${cpePublicIpAddress} routing-instance ${internet-routing-instance};
}
}
service-set oracle-vpn-tunnel-${ipAddress3} {
ipsec-vpn-options {
local-gateway ${cpePublicIpAddress} routing-instance ${internet-routing-instance};
}
}
}
This configures the Juniper MX interfaces that represent the "inside" and "outside" of the
IPSec tunnels. There is one pair of interface/units per tunnel. The unit numbers must be
unique on your router.
interfaces {
${msInterface1} {
unit ${insideMsUnit1} {
description oracle-vpn-tunnel-${ipAddress1}-INSIDE;
family inet;
service-domain inside;
}
unit ${outsideMsUnit1} {
description oracle-vpn-tunnel-${ipAddress1}-OUTSIDE;
family inet;
service-domain outside;
}
}
${msInterface2} {
unit ${insideMsUnit2} {
description oracle-vpn-tunnel-${ipAddress2}-INSIDE;
family inet;
service-domain inside;
}
unit ${outsideMsUnit2} {
description oracle-vpn-tunnel-${ipAddress2}-OUTSIDE;
family inet;
service-domain outside;
}
}
${msInterface3} {
unit ${insideMsUnit3} {
description oracle-vpn-tunnel-${ipAddress3}-INSIDE;
family inet;
service-domain inside;
}
unit ${outsideMsUnit3} {
description oracle-vpn-tunnel-${ipAddress3}-OUTSIDE;
family inet;
service-domain outside;
}
}
}
The IPSec tunnels require static routes to direct traffic through them. The routes are
configured to point down the tunnel interfaces. If an IPSec tunnel is down, the Juniper MX
stops using that route. You should redistribute these routes into your on-premises network.
routing-options {
static {
route ${vcnCidrBlock} next-hop [ ${msInterface1}.${insideMsUnit1}
${msInterface2}.${insideMsUnit2} ${msInterface3}.${insideMsUnit3} ]
}
}
SERVICES CONFIGURATION
next-hop-service {
inside-service-interface ${msInterface3}.${insideMsUnit3};
outside-service-interface ${msInterface3}.${outsideMsUnit3};
}
ipsec-vpn-options {
local-gateway ${cpePublicIpAddress}
}
ipsec-vpn-rules oracle-vpn-tunnel-${ipAddress3};
}
ipsec-vpn {
rule oracle-vpn-tunnel-${ipAddress1} {
term 1 {
from {
ipsec-inside-interface ${msInterface1}.${insideMsUnit1};
}
then {
remote-gateway ${ipAddress1};
dynamic {
ike-policy oracle-ike-policy-${ipAddress1};
ipsec-policy oracle-ipsec-policy;
}
tunnel-mtu 1420;
initiate-dead-peer-detection;
dead-peer-detection {
interval 5;
threshold 4;
}
}
}
match-direction input;
}
rule oracle-vpn-tunnel-${ipAddress2} {
term 1 {
from {
ipsec-inside-interface ${msInterface2}.${insideMsUnit2};
}
then {
remote-gateway ${ipAddress2};
dynamic {
ike-policy oracle-ike-policy-${ipAddress2};
ipsec-policy oracle-ipsec-policy;
}
tunnel-mtu 1420;
initiate-dead-peer-detection;
dead-peer-detection {
interval 5;
threshold 4;
}
}
}
match-direction input;
}
rule oracle-vpn-tunnel-${ipAddress3} {
term 1 {
from {
ipsec-inside-interface ${msInterface3}.${insideMsUnit3};
}
then {
remote-gateway ${ipAddress3};
dynamic {
ike-policy oracle-ike-policy-${ipAddress3};
ipsec-policy oracle-ipsec-policy;
}
tunnel-mtu 1420;
initiate-dead-peer-detection;
dead-peer-detection {
interval 5;
threshold 4;
}
}
}
match-direction input;
}
ipsec {
proposal esp-aes256-sha1 {
protocol esp;
authentication-algorithm hmac-sha1-96;
encryption-algorithm aes-256-cbc;
lifetime-seconds 3600;
}
policy oracle-ipsec-policy {
perfect-forward-secrecy {
keys group5;
}
proposals [ esp-aes256-sha1 ];
}
}
ike {
proposal aes256-sha384-group5 {
authentication-method pre-shared-keys;
dh-group group5;
authentication-algorithm sha-384;
encryption-algorithm aes-256-cbc;
lifetime-seconds 28800;
}
policy oracle-ike-policy-${ipAddress1} {
mode main;
version 1;
proposals [ aes256-sha384-group5 ];
local-id ipv4_addr ${cpePublicIpAddress};
remote-id ipv4_addr ${ipAddress1};
pre-shared-key ascii-text ${sharedSecret1};
}
policy oracle-ike-policy-${ipAddress2} {
mode main;
version 1;
proposals [ aes256-sha384-group5 ];
local-id ipv4_addr ${cpePublicIpAddress};
remote-id ipv4_addr ${ipAddress2};
pre-shared-key ascii-text ${sharedSecret2};
}
policy oracle-ike-policy-${ipAddress3} {
mode main;
version 1;
proposals [ aes256-sha384-group5 ];
local-id ipv4_addr ${cpePublicIpAddress};
remote-id ipv4_addr ${ipAddress3};
pre-shared-key ascii-text ${sharedSecret3};
}
}
}
}
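As a side note, the dead-peer-detection settings above (interval 5, threshold 4) send a liveness probe every 5 seconds and declare the peer dead after 4 consecutive missed replies, so a failed tunnel is detected and its static route withdrawn in roughly 20 seconds. A quick sketch of that arithmetic, assuming the standard Junos interpretation of these two knobs:

```python
dpd_interval_s = 5   # seconds between DPD probes (from the config above)
dpd_threshold = 4    # consecutive unanswered probes before the peer is declared dead

# Approximate worst-case time to detect a dead tunnel and fail over
detection_time_s = dpd_interval_s * dpd_threshold
print(detection_time_s)  # 20
```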
Juniper SRX
This configuration was validated using a Juniper SRX 240 running 12.1X44-D35.5.
Get the following parameters from the Oracle Cloud Infrastructure Console or API.
${ipAddress#}
l Oracle VPN headend IPSec tunnel endpoints. There is one value for each tunnel.
l Example value: 129.146.12.52
${sharedSecret#}
l The IPSec IKE pre-shared-key. There is one value for each tunnel.
l Example value: ZNmzNKEDPfAMkD7nTH3SWr6OFabdT6exXn6enSlsKbE
${cpePublicIpAddress}
l The public IP address for the CPE (previously made available to Oracle via the Console).
${VcnCidrBlock}
l When creating the VCN, your company selected this CIDR to represent the IP aggregate
network for all VCN hosts.
l Example Value: 10.0.0.0/20
${cpePublicInterface}
l The name of the Juniper interface where the CPE IP address is configured.
l Example value: ge-0/0/1.0
${tunnelUnit#}
l You will need multiple tunnel unit numbers per IPSec connection.
l One per IPSec tunnel.
l Oracle recommends setting up all Oracle configured tunnels for maximum redundancy.
l The Juniper SRX uses the st0 interface for IPSec tunnels.
l Example values: 1, 2, 3
${InsideZoneName}
l This zone contains the interfaces that are part of your internal network that need to
reach resources in your Oracle VCN.
${OracleVpnZoneName}
l This zone contains the interfaces that are part of the Oracle Cloud Infrastructure
network.
l This includes the inside of the IPSec tunnel interfaces.
${InternetSecurityZoneName}
l This zone contains the internet-facing interface where the CPE public IP address is
configured.
Each region has multiple Oracle IPSec headends. The template below will allow setting up
three tunnels on your CPE, one to each headend. In the table below, "User" is you/your
company.
l ${tunnelUnit1}: provided by User, for tunnel 1
l ${tunnelUnit2}: provided by User, for tunnel 2
l ${tunnelUnit3}: provided by User, for tunnel 3
The tunnels use these IKE (Phase 1) and IPSec (Phase 2) parameters:
l Encryption (IKE): AES-256-cbc
l Encryption (IPSec): AES-256-cbc
l Proxy ID: local=0.0.0.0/0, remote=0.0.0.0/0, service=any
CPE Configuration
This configures the Juniper SRX interfaces that represent the "inside" of the IPSec tunnels.
There is one interface/unit per tunnel.
interfaces {
st0 {
unit ${tunnelUnit1} {
family inet;
}
unit ${tunnelUnit2} {
family inet;
}
unit ${tunnelUnit3} {
family inet;
}
}
}
The IPSec tunnels require static routes to get traffic to go through them. The routes are
configured to point down the tunnel interfaces. If the IPSec tunnel is down, the Juniper SRX
will stop using that route. You should redistribute these routes into your customer premise
network.
routing-options {
static {
route ${VcnCidrBlock} next-hop [ st0.${tunnelUnit1} st0.${tunnelUnit2} st0.${tunnelUnit3} ];
}
}
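The `${...}` placeholders used throughout these templates follow the same syntax as Python's `string.Template`, so you can script the substitution instead of editing by hand. A minimal sketch rendering the static-route snippet above; the parameter values are the example values from this guide, not real ones:

```python
from string import Template

# Render the SRX static-route snippet by substituting example
# parameter values. The values below are illustrative only.
snippet = Template(
    "routing-options {\n"
    "    static {\n"
    "        route ${VcnCidrBlock} next-hop "
    "[ st0.${tunnelUnit1} st0.${tunnelUnit2} st0.${tunnelUnit3} ];\n"
    "    }\n"
    "}"
)

params = {
    "VcnCidrBlock": "10.0.0.0/20",  # example VCN CIDR from this guide
    "tunnelUnit1": "1",
    "tunnelUnit2": "2",
    "tunnelUnit3": "3",
}

rendered = snippet.substitute(params)
print(rendered)
```

The same approach works for any of the configuration templates in this guide, since they all use the `${name}` placeholder convention.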
SECURITY POLICY
security {
ike {
proposal oracle-ike-proposal {
authentication-method pre-shared-keys;
dh-group group5;
authentication-algorithm sha-384;
encryption-algorithm aes-256-cbc;
lifetime-seconds 28800;
}
policy ike_pol_oracle-vpn-${ipAddress1} {
mode main;
proposals oracle-ike-proposal;
pre-shared-key ascii-text ${sharedSecret1};
}
policy ike_pol_oracle-vpn-${ipAddress2} {
mode main;
proposals oracle-ike-proposal;
pre-shared-key ascii-text ${sharedSecret2};
}
policy ike_pol_oracle-vpn-${ipAddress3} {
mode main;
proposals oracle-ike-proposal;
pre-shared-key ascii-text ${sharedSecret3};
}
gateway gw_oracle-${ipAddress1} {
ike-policy ike_pol_oracle-vpn-${ipAddress1};
address ${ipAddress1};
dead-peer-detection;
external-interface ${cpePublicInterface};
}
gateway gw_oracle-${ipAddress2} {
ike-policy ike_pol_oracle-vpn-${ipAddress2};
address ${ipAddress2};
dead-peer-detection;
external-interface ${cpePublicInterface};
}
gateway gw_oracle-${ipAddress3} {
ike-policy ike_pol_oracle-vpn-${ipAddress3};
address ${ipAddress3};
dead-peer-detection;
external-interface ${cpePublicInterface};
}
}
ipsec {
vpn-monitor-options;
proposal oracle-ipsec-proposal {
protocol esp;
authentication-algorithm hmac-sha1-96;
encryption-algorithm aes-256-cbc;
lifetime-seconds 3600;
}
policy ipsec_pol_oracle-vpn {
perfect-forward-secrecy {
keys group5;
}
proposals oracle-ipsec-proposal;
}
vpn oracle-vpn-${ipAddress1} {
bind-interface st0.${tunnelUnit1};
vpn-monitor;
ike {
gateway gw_oracle-${ipAddress1};
ipsec-policy ipsec_pol_oracle-vpn;
}
establish-tunnels immediately;
}
vpn oracle-vpn-${ipAddress2} {
bind-interface st0.${tunnelUnit2};
vpn-monitor;
ike {
gateway gw_oracle-${ipAddress2};
ipsec-policy ipsec_pol_oracle-vpn;
}
establish-tunnels immediately;
}
vpn oracle-vpn-${ipAddress3} {
bind-interface st0.${tunnelUnit3};
vpn-monitor;
ike {
gateway gw_oracle-${ipAddress3};
ipsec-policy ipsec_pol_oracle-vpn;
}
establish-tunnels immediately;
}
}
policies {
from-zone ${InsideZoneName} to-zone ${OracleVpnZoneName} {
policy vpn-out {
match {
source-address any-ipv4;
destination-address any-ipv4;
application any;
source-identity any;
}
then {
permit;
}
}
}
}
zones {
security-zone ${OracleVpnZoneName} {
interfaces {
st0.${tunnelUnit1} {
host-inbound-traffic {
protocols {
bgp;
}
}
}
st0.${tunnelUnit2} {
host-inbound-traffic {
protocols {
bgp;
}
}
}
st0.${tunnelUnit3} {
host-inbound-traffic {
protocols {
bgp;
}
}
}
}
}
security-zone ${InternetSecurityZoneName} {
interfaces {
${cpePublicInterface} {
host-inbound-traffic {
system-services {
ike;
ping;
}
}
}
}
}
}
}
Palo Alto
This configuration was validated using a PA-500 running PAN-OS version 6.0.6.
Get the following parameters from the Oracle Cloud Infrastructure Console or API.
${ipAddress#}
l Oracle VPN headend IPSec tunnel endpoints. There is one value for each tunnel.
l Example value: 129.146.12.52
${sharedSecret#}
l The IPSec IKE pre-shared-key. There is one value for each tunnel.
l Example value: ZNmzNKEDPfAMkD7nTH3SWr6OFabdT6exXn6enSlsKbE
${cpeInterfaceName}
l The name of the CPE interface where the CPE IP address is configured.
l Example Value: ethernet1/1
${VcnCidrBlock}
l When creating the VCN, your company selected this CIDR to represent the IP aggregate
network for all VCN hosts.
l Example Value: 10.0.0.0/20
${tunnelUnit#}
l The tunnel interface numbers. One per IPSec tunnel.
l Example values: 1, 2, 3
${oracleSecurityZoneName}
l The tunnels need to be placed inside a security zone that defines their access profile.
l Example: "Oracle Cloud Infrastructure"
l Note: The value must be enclosed in quotation marks.
${CpeVirtualRouterName}
l The tunnels terminate into a virtual router in the Palo Alto. You can either terminate
them into an existing virtual router, or configure a new virtual router.
l Example Value: Oracle-virtual-router
The tunnels use these IKE (Phase 1) and IPSec (Phase 2) parameters:
l Encryption (IKE): AES-256-cbc
l Encryption (IPSec): AES-256-cbc
l Proxy ID: local=0.0.0.0/0, remote=0.0.0.0/0, service=any
IPSec Configuration
set network tunnel ipsec oracle-ipsec-vpn-${ipAddress1} auto-key ike-gateway oracle-gateway-${ipAddress1}
set network tunnel ipsec oracle-ipsec-vpn-${ipAddress1} auto-key ipsec-crypto-profile oracle-bare-metal-cloud-ipsec
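Because the same pair of `set` commands is needed for every tunnel, you can generate them per headend address. A sketch; the second and third addresses below are hypothetical placeholders, not real Oracle endpoints, and should be replaced with the `${ipAddress#}` values from your Console:

```python
# Hypothetical headend addresses -- replace with the ${ipAddress#}
# values shown in your Oracle Console.
ip_addresses = ["129.146.12.52", "129.146.13.53", "129.146.14.54"]

commands = []
for ip in ip_addresses:
    # One IKE-gateway binding and one crypto-profile binding per tunnel
    commands.append(
        f"set network tunnel ipsec oracle-ipsec-vpn-{ip} "
        f"auto-key ike-gateway oracle-gateway-{ip}"
    )
    commands.append(
        f"set network tunnel ipsec oracle-ipsec-vpn-{ip} "
        f"auto-key ipsec-crypto-profile oracle-bare-metal-cloud-ipsec"
    )

print("\n".join(commands))
```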
FastConnect Overview
Oracle Cloud Infrastructure FastConnect provides an easy way to create a dedicated, private
connection between your data center and Oracle Cloud Infrastructure. FastConnect provides
higher-bandwidth options, and a more reliable and consistent networking experience compared
to internet-based connections.
FastConnect supports two uses:
l Private peering: To extend your existing infrastructure into a virtual cloud network
(VCN) in Oracle Cloud Infrastructure (for example, to implement a hybrid cloud, or a lift
and shift scenario). Communication across the connection is with IPv4 private
addresses (typically RFC 1918).
l Public peering: To access public services in Oracle Cloud Infrastructure without using
the internet. For example, Object Storage, the Oracle Cloud Infrastructure Console and
APIs, or public load balancers in your VCN. Communication across the connection is with
IPv4 public IP addresses. Without FastConnect, the traffic destined for public IP
addresses would be routed over the internet. With FastConnect, that traffic goes over
your private physical connection.
In general it's assumed you'll use private peering, and you might also use public peering.
Most of this documentation is relevant to both, with specific details called out for private
versus public.
An example FastConnect location is CoreSite in Reston, VA; see How and Where to Connect for
the full list of locations and Oracle providers.
Concepts
Here are some important concepts to understand (also see the following diagrams):
FASTCONNECT
The general concept of a connection between your existing network and Oracle Cloud
Infrastructure over a private physical network instead of the internet.
FASTCONNECT LOCATION
A specific Oracle data center where you can connect with Oracle Cloud Infrastructure.
METRO AREA
A geographical area (for example, Ashburn) with multiple FastConnect locations. All
locations in a metro area connect to the same set of availability domains for resiliency in
case of failure in a single location.
COLOCATION
The situation where your equipment is deployed into a FastConnect location. If your
network service provider is not on the list of Oracle providers in How and Where to
Connect, you must colocate.
CROSS-CONNECT
In a colocation scenario, this is the physical cable connecting your existing network to
Oracle in the FastConnect location.
CROSS-CONNECT GROUP
In a colocation scenario, this is a link aggregation group (LAG) that contains at least one
cross-connect. You can add additional cross-connects to a cross-connect group as your
bandwidth needs increase. This is applicable only for colocation.
ORACLE PROVIDER
A network service provider with a physical connection to the Oracle Cloud Infrastructure
network in a FastConnect location. See the list of the Oracle providers in How and Where
to Connect. If your provider is in the list, you can use FastConnect through the provider.
Otherwise, you must colocate with Oracle in a FastConnect location.
THIRD-PARTY PROVIDER
A network service provider that is NOT on the list of Oracle providers in How and Where to
Connect. If you have a third-party provider and want to use FastConnect, you must
colocate with Oracle in a FastConnect location.
VIRTUAL CIRCUIT
An isolated network path that runs over one or more physical network connections to
provide a single, logical connection between the edge of your existing network and Oracle
Cloud Infrastructure. Private virtual circuits support private peering, and public virtual
circuits support public peering (see Uses for FastConnect). Each virtual circuit is made up
of information shared between you and Oracle, as well as a provider (if you're connecting
through an Oracle provider). You could have multiple private virtual circuits, for example,
to isolate traffic from different parts of your organization (one virtual circuit for
10.0.1.0/24; another for 172.16.0.0/16), or to provide redundancy.
The following diagram illustrates the two ways to connect to Oracle with FastConnect: either
through colocation with Oracle in the FastConnect location, or through an Oracle provider. In
both cases, the connection goes between the edge of your existing network and Oracle.
Physical Connection
The next two diagrams give more detail about the physical connections. They also show the
metro area that contains the FastConnect location, and a VCN within an Oracle Cloud
Infrastructure region.
The first diagram shows the colocation scenario, with your physical connection to Oracle
within the FastConnect location.
The next diagram shows a scenario with an Oracle provider, with your physical connection to
the provider, and the provider's physical connection to Oracle within the FastConnect location.
The next two diagrams show a private virtual circuit, which is a single, logical connection
between your edge and Oracle Cloud Infrastructure by way of your DRG. Traffic is destined for
private IP addresses in your VCN.
A public virtual circuit gives your existing network access to regional public IPv4 addresses in
Oracle Cloud Infrastructure. For example, Object Storage, the Oracle Cloud Infrastructure
Console and APIs, and public load balancers in your VCN. All communication across a public
virtual circuit uses public IP addresses.
The first diagram shows the colocation scenario with both a private virtual circuit and a public
virtual circuit. Notice that the DRG is not involved with the public virtual circuit, only the
private virtual circuit.
l You choose which of your organization's public IP prefixes you want to use with the
virtual circuit. Each prefix must be /24 or less specific. Oracle verifies your
organization's ownership of each prefix before sending any traffic for it across the
connection. Oracle's verification for a given prefix can take up to three business days.
You can get the status of the verification of each prefix in the Oracle Console or API.
Oracle begins advertising the Oracle Cloud Infrastructure public IP
addresses across the connection only after successfully verifying at least
one of your public prefixes.
l Your existing network will receive Oracle's public IP addresses through both
FastConnect and your Internet Service Provider (ISP). When configuring your edge,
make sure to give higher preference to FastConnect over your ISP, or you will not
receive the benefits of FastConnect.
l You must configure your firewall rules to allow the desired traffic coming from the
Oracle public IP addresses.
l Oracle prefers the most specific route when routing traffic from Oracle Cloud
Infrastructure to other destinations. This means that when Oracle replies to traffic
coming from one of your verified public prefixes, the reply travels over the FastConnect
public virtual circuit, even if you have an internet gateway on your VCN. Replies to
traffic from any public prefixes that are not on your list or not yet verified by Oracle still
go through the internet gateway. For reference, this diagram shows the private and
public virtual circuits as well as an internet gateway on the VCN:
l You can add or remove public IP prefixes at any time by editing the virtual circuit. If you
add a new prefix, Oracle first verifies your company's ownership before advertising it
across the connection. If you remove a prefix, Oracle stops advertising the prefix within
a few minutes of your editing the virtual circuit.
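The "/24 or less specific" rule above means a prefix's mask length can be at most 24 bits. A quick check with Python's `ipaddress` module (the helper function name is ours, not an Oracle API):

```python
import ipaddress

def prefix_allowed(prefix: str) -> bool:
    """True if a public prefix is /24 or less specific (mask <= 24 bits)."""
    return ipaddress.ip_network(prefix, strict=True).prefixlen <= 24

print(prefix_allowed("203.0.113.0/24"))  # True: exactly /24
print(prefix_allowed("203.0.113.0/25"))  # False: more specific than /24
print(prefix_allowed("198.51.0.0/16"))   # True: less specific than /24
```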
This section is applicable if you're using FastConnect through an Oracle provider. A Border
Gateway Protocol (BGP) session is established from your edge, but where it goes to depends
on which Oracle provider you use.
To Oracle: With some of the Oracle providers, the BGP session goes from your edge to
Oracle, as shown in the following diagram. When setting up the virtual circuit with Oracle, you
are asked to provide basic BGP peering information (see General Requirements).
To the Oracle provider: With other Oracle providers, your BGP session goes from your
edge to the provider's, as shown in the following diagram. When setting up the virtual circuit
with Oracle, you are NOT asked for any BGP session information. Instead, you share BGP
information with your Oracle provider. Notice that there's a separate BGP session that the
provider establishes with Oracle.
General Requirements
Before getting started with FastConnect, make sure you meet the following requirements:
l Oracle Cloud Infrastructure account, with at least one user with appropriate Oracle
Cloud Infrastructure Identity and Access Management (IAM) permissions (for example,
a user in the Administrators group).
l Network equipment that supports Layer 3 routing using BGP.
l For colocation with Oracle: Ability to connect using single mode fiber in your selected
FastConnect location. Also see Hardware and Routing Requirements for Colocation.
l For connection to an Oracle provider: At least one physical network connection with the
provider. Also see Routing Requirements.
l For private peering only: At least one existing DRG set up for your VCN.
l For public peering only: The list of public IP address prefixes that you want to use with
the connection. Oracle will validate your ownership of each prefix.
What's Next?
See these topics to get started:
l FastConnect concepts: Make sure you're familiar with the basic concepts covered in
FastConnect Overview.
l Routing requirements: Review the Routing Requirements, which are particularly
relevant if your BGP session is between your edge and Oracle.
l Tenancy setup and compartment design: If you haven't yet, set up your tenancy.
Think about who needs access to Oracle Cloud Infrastructure and how. For more
information, see "Setting Up Your Tenancy" in the Oracle Cloud Infrastructure Getting
Started Guide. Specifically for FastConnect, see Required IAM Policy to understand the
policy required to work with FastConnect components.
l Cloud network design: Design your virtual cloud network (VCN), including how you
want to allocate your VCN's subnets, define security list rules, define route rules, set up
load balancers, and so on. For more information, see Overview of Networking.
l Redundancy: Think through your overall redundancy model to ensure your network
can handle planned maintenance by Oracle or your organization, and unexpected
failures of the various components. See Network Design for Redundancy.
l Public IP prefixes: If you plan to set up a public virtual circuit, get the list of the
public IP prefixes that you want to use with the connection. Oracle must validate your
organization's ownership of each of the prefixes before advertising each one over the
connection.
l Cloud network setup: Set up your VCN, subnets, DRG, security lists, IAM policies,
and so on, according to your design. For instructions on how to set up the connection
between your VCN and existing network, see Getting Started with FastConnect.
Routing Requirements
Here are general routing requirements for FastConnect.
l BGP prefix limit: For public virtual circuits: 200 prefixes. For private virtual circuits:
2000 prefixes.
l BGP ASN: 2-byte ASN. Public virtual circuits require a public ASN. Oracle's BGP ASN is
31898.
l BGP keep-alive interval: 60s
l BGP hold-time interval: 180s
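The limits above can be sanity-checked before you request a virtual circuit. A sketch using our own helper (not an Oracle tool), assuming the figures listed above:

```python
def check_virtual_circuit(asn: int, prefix_count: int, public: bool) -> list:
    """Flag violations of the FastConnect routing requirements above."""
    problems = []
    if not 0 < asn < 65536:          # 2-byte (16-bit) ASN
        problems.append("ASN is not a 2-byte value")
    limit = 200 if public else 2000  # BGP prefix limit per circuit type
    if prefix_count > limit:
        problems.append(f"too many prefixes (limit {limit})")
    return problems

print(check_virtual_circuit(asn=31898, prefix_count=150, public=True))    # [] - OK
print(check_virtual_circuit(asn=70000, prefix_count=2500, public=False))  # two problems
```

Note that this sketch does not check whether a public virtual circuit's ASN is actually a registered public ASN; that still has to be verified against the registry.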
If your user is not an administrator, then a policy like this would generally cover all the Networking resources:
Allow group NetworkAdmins to manage virtual-network-family in tenancy
To only create and manage a virtual circuit, you would need a policy like this:
Allow group VirtualCircuitAdmins to manage drgs in tenancy
Allow group VirtualCircuitAdmins to manage virtual-circuits in tenancy
The first statement (to manage DRGs) is necessary only for private virtual circuits.
For more information, see Getting Started with Policies and Common Policies.
In general, you should design your network with redundancy on both sides of the connection in mind:
Oracle handles redundancy of the routers and physical circuits in the FastConnect locations.
You should then handle redundancy of the physical connection between your existing network
and the Oracle provider. To do that, create two virtual circuits. Ensure that each runs on a
different physical network connection to the provider, and goes to a different FastConnect
location in the same metro area. Both virtual circuits go to the same DRG (if they're private
virtual circuits). You'll have two separate BGP sessions from your edge to Oracle (one per
virtual circuit). See the following diagram. An active/active configuration for routing traffic
across the two connections is recommended and supported by Oracle.
Also see the sequence diagram in To get the status of your virtual circuit.
Instructions:
Repeat the following steps for each virtual circuit you want to create.
1. In the Console, confirm you're viewing the compartment that you want to work in. If
you're not sure which one, use the compartment that contains the DRG that you'll
connect to (for a private virtual circuit). This choice of compartment, in conjunction with
a corresponding IAM policy, controls who has access to the virtual circuit you're about
to create.
2. Click Networking, and then click FastConnect.
The resulting FastConnect page is where you'll create a new connection and later
return to when you need to manage the connection.
3. Click Create Connection.
4. Select Connect via provider and choose the provider from the list.
The resulting dialog box shows the information required to set up the virtual circuit.
5. Enter the following for your virtual circuit:
l Name: A friendly name that helps you keep track of your virtual circuits. The
value does not need to be unique across your virtual circuits, and you can change
it later. Avoid entering confidential information.
l Create in Compartment: Leave as is (the compartment you're currently
working in).
6. Choose the virtual circuit type (private or public). A private virtual circuit is for private
peering (where your existing network receives your VCN's private IP addresses). A
public virtual circuit is for public peering (where your existing network receives the
Oracle Cloud Infrastructure regional public IP addresses). Also see Uses for
FastConnect.
l For a private virtual circuit, enter the following:
o Dynamic Routing Gateway Compartment: Select the compartment
where the DRG resides (it should already be selected).
o Dynamic Routing Gateway: Select the DRG to route the FastConnect
traffic to.
o Provisioned Bandwidth: Choose your desired value. If your bandwidth
needs increase later, you can update the virtual circuit to use a different
value (see To edit a virtual circuit).
If your BGP session goes to Oracle (see Provider Scenario: BGP Session to Either
Oracle or the Provider), the dialog box will include additional fields for the BGP
session:
o Customer BGP IP Address: The BGP peering IP address for your edge
router, with either a /30 or /31 subnet mask.
o Oracle BGP IP Address: The BGP peering IP address you want to use for
the DRG, with either a /30 or /31 subnet mask.
o Customer BGP ASN: The public or private ASN for your network.
l For a public virtual circuit, enter the following:
o Provisioned Bandwidth: Choose your desired value. If your bandwidth
needs increase later, you can update the virtual circuit to use a different
value (see To edit a virtual circuit).
o Customer BGP ASN: The public ASN for your network. Present only if your
BGP session goes to Oracle (see Provider Scenario: BGP Session to Either
Oracle or the Provider). Note that Oracle specifies the BGP IP addresses for
a public virtual circuit.
o Public IP Prefixes: The public IP prefixes that you want Oracle to receive
over the connection (each one must be /24 or less specific). You can enter a
comma-separated list of prefixes, or one per line.
7. Click Continue.
The virtual circuit is created. Its OCID and a link to the provider's portal are displayed in
the resulting confirmation dialog box. The OCID is also available on the main
FastConnect page and with the other virtual circuit details.
8. Copy and paste the OCID to another location. You will need to give it to your provider in
the next task.
9. Click Close.
The virtual circuit is listed on the FastConnect page. You can click the virtual circuit to see the
full set of details. These items indicate the status of the connection:
l Provider State: Whether the provider is aware of your request to create a virtual
circuit and is provisioning it from their end. At this point, the value is INACTIVE.
l Lifecycle State: The current status of the virtual circuit during the time it's being set
up. At this point, the value is PENDING_PROVIDER.
l Large "VC" icon: While the virtual circuit is still being set up, this large, colored icon
also indicates the Lifecycle State (for example, PENDING_PROVIDER). After the
Lifecycle State switches to PROVISIONED, this icon switches to indicate the state of the
virtual circuit's BGP session (either green/UP or red/DOWN).
Also see the diagram in To get the status of your virtual circuit.
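Step 6 above asks for BGP peering addresses with a /30 or /31 subnet mask. The usable addresses of such a link can be derived with the `ipaddress` module; the subnets below are hypothetical examples, not Oracle-assigned values:

```python
import ipaddress

# Hypothetical /31 peering link (RFC 3021 point-to-point addressing):
# both addresses are usable.
link = ipaddress.ip_network("10.254.0.0/31")
customer_ip, oracle_ip = list(link)

print(f"Customer BGP IP Address: {customer_ip}/{link.prefixlen}")
print(f"Oracle BGP IP Address:   {oracle_ip}/{link.prefixlen}")

# A /30 also works, but only its two middle addresses are usable hosts.
link30 = ipaddress.ip_network("10.254.0.4/30")
hosts = list(link30.hosts())  # [10.254.0.5, 10.254.0.6]
```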
1. Work with your AT&T account team to sign up for NetBond for Cloud services.
In return, you receive a welcome letter with credentials for the AT&T Cloud Services
Portal.
2. Sign in to the AT&T Cloud Services portal and create one Virtual Network Connection
(VNC). You must provide these items: name of AT&T MPLS VPN, region, free-form name
for the VNC, and a minimum bandwidth commitment.
3. Inside the VNC, create a VLAN. You must provide a /29 address space and free-form
name.
In return, you receive a Service Key for AT&T NetBond for Cloud.
4. Give Oracle the Service Key you received in the preceding step. To do this, create a
service request at My Oracle Support.
Oracle will use the Service Key that you provided to provision the virtual circuit(s). The
process takes typically 1-2 business days. During that time, the virtual circuit's Provider
State changes to ACTIVE, and the Lifecycle State changes to PROVISIONING. When the
virtual circuit is completely set up, the Lifecycle State switches to PROVISIONED.
If your network design includes a second physical connection and virtual circuit for
redundancy, repeat the steps above with the second port you've set up with Equinix Cloud
Exchange and the second virtual circuit.
Oracle will receive an email and then provision the virtual circuit(s). The process takes
typically 1-2 business days. During that time, the virtual circuit's Provider State changes to
ACTIVE, and the Lifecycle State changes to PROVISIONING. When the virtual circuit is
completely set up, the Lifecycle State switches to PROVISIONED.
If your network design includes a second Cloud Access Port and virtual circuit for redundancy,
repeat the preceding steps with the second Cloud Access Port you've set up with Interxion and
the second virtual circuit.
On the Oracle Console, you will soon see the virtual circuit's Provider State change to
ACTIVE. The Lifecycle State will also change to PROVISIONING. Oracle's system will then
complete the virtual circuit setup, and the Lifecycle State will shortly switch to
PROVISIONED. For more information, see the diagram in To get the status of your virtual
circuit.
If your network design includes a second Megaport and virtual circuit for redundancy, repeat
the preceding steps with the second Megaport you've set up and the second virtual circuit.
Make sure to choose the other building when specifying the Oracle Cloud Target Port for the
virtual circuit.
On the Oracle Console, you will soon see the virtual circuit's Provider State change to
ACTIVE. The Lifecycle State will also change to PROVISIONING. Oracle's system will then
complete the virtual circuit setup, and the Lifecycle State will shortly switch to
PROVISIONED. For more information, see the diagram in To get the status of your virtual
circuit.
Oracle will receive an email and then provision the virtual circuit(s). The process takes
typically 1-2 business days. During that time, the virtual circuit's Provider State changes to
ACTIVE, and the Lifecycle State changes to PROVISIONING. When the virtual circuit is
completely set up, the Lifecycle State switches to PROVISIONED.
If your BGP session goes to Oracle: Configure your edge router(s) for the BGP session. After you
successfully configure the BGP session, the virtual circuit's BGP session state switches to UP.
If your BGP session instead goes to the Oracle provider: You still need to configure
your router(s) if you haven't already. You may need to contact your provider to get the
required BGP peering information. This BGP session must be up and running for FastConnect
to work. Also configure your edge router(s) for redundancy according to the network design
you decided on earlier (see Network Design for Redundancy).
1. Make sure that Oracle has successfully verified at least one of the public prefixes you've
submitted. You can see the status of each prefix by viewing the virtual circuit's details in
the Console. When one of the prefixes has been validated, Oracle starts advertising the
regional OCI public addresses over the connection.
2. Launch an instance with a public IP address.
3. Ping the public IP address from a host in your existing private network. You should see
the packet on the FastConnect interface on the virtual circuit. If you do, your
FastConnect public virtual circuit is ready to use. However, remember that only the
public prefixes that Oracle has successfully verified so far are advertised on the
connection.
The following diagram shows the different states of the virtual circuit when you're setting it
up.
You can change the following aspects of a virtual circuit:
l The name
l The bandwidth
l Which DRG it uses (for a private virtual circuit)
l The public IP prefixes (for a public virtual circuit)
l Depending on the situation, you might also have access to the BGP session information
for the virtual circuit and thus can change it.
1. In the Console, go to Networking, and then click FastConnect to view your list of
connections.
2. Select the compartment where the connection resides, and then click the connection to
view its details.
3. Click the virtual circuit to view its details.
4. Click Edit and make your changes.
5. Click Save.
1. In the Console, go to Networking, and then click FastConnect to view your list of
connections.
2. Select the compartment where the connection resides, and then click the connection to
view its details.
3. Click the virtual circuit to view its details.
4. Click Delete.
5. Confirm when prompted.
The virtual circuit's Lifecycle State changes to TERMINATING and then to TERMINATED.
You can specify your public IP prefixes when you create the virtual circuit. See Task 4: Set up
your virtual circuit(s).
You can add or remove public IP prefixes later after creating the virtual circuit. See To edit a
virtual circuit. If you add a new prefix, Oracle first verifies your company's ownership before
advertising it across the connection. If you remove a prefix, Oracle stops advertising the
prefix within a few minutes of your editing the virtual circuit.
You can view the state of Oracle's verification of a given public prefix by viewing the virtual
circuit's details in the Console.
l FastConnect concepts: Make sure you're familiar with the basic concepts covered in
FastConnect Overview.
l Requirements: Review the hardware and routing requirements.
l Tenancy setup and compartment design: If you haven't yet, set up your tenancy.
Think about who needs access to Oracle Cloud Infrastructure and how. For more
information, see "Setting Up Your Tenancy" in the Oracle Cloud Infrastructure Getting
Started Guide. Specifically for FastConnect, see Required IAM Policy to understand the
policy required to work with FastConnect components.
l Cloud network design: Design your virtual cloud network (VCN), including how you
want to allocate your VCN's subnets, define security list rules, define route rules, set up
load balancers, and so on. For more information, see Overview of Networking.
l Redundancy: Think through your overall redundancy model to ensure your network
can handle planned maintenance by Oracle or your organization, and unexpected
failures of the various components.
l Public IP prefixes: If you plan to set up a public virtual circuit, get the list of the
public IP prefixes that you want to use with the connection. Oracle must validate your
organization's ownership of each of the prefixes before advertising each one over the
connection.
l Cloud network setup: Set up your VCN, subnets, DRG, security lists, IAM policies,
and so on, according to your design. For instructions on how to set up the connection
between your VCN and existing network, see Getting Started with FastConnect.
l Ethernet: 10 GbE
l Fiber type: 1310 NM Single Mode
l Signal loss: <-12 dB
l Optics: 10G LR
l Fiber redundancy: Multiple 10GE with device-level redundancy
l Minimum capacity: 2 x 10 GbE
l LAG protocol: LACP with short timers (3 @ 1s)
l VLAN tagging: 802.1q (single tag)
l VLAN range: 100-4094 (VLANs are assigned by you)
l Maximum interface MTU: 9196 (includes 4-byte FCS trailer)
For routing:
If your user is not, then a policy like this would generally cover all the Networking resources:
Allow group NetworkAdmins to manage virtual-network-family in tenancy
To only create and manage cross-connects, cross-connect groups, and virtual circuits, you
would need a policy like this:
Allow group FastConnectAdmins to manage drgs in tenancy
The first statement (to manage DRGs) is necessary only for private virtual circuits.
For more information, see Getting Started with Policies and Common Policies.
Instructions:
1. In the Console, confirm you're viewing the compartment that you want to work in. If
you're not sure which one, use the compartment that contains the DRG that you'll
connect to (for a private virtual circuit). This choice of compartment, in conjunction with
a corresponding IAM policy, controls who has access to the cross-connect group and
cross-connect(s) you're about to create.
2. Click Networking, and then click FastConnect.
The resulting FastConnect page is where you'll create a new connection and later
return to when you need to manage the connection and its components.
l FastConnect (FC) icon: The large icon in the top-left corner. It shows the general
status of the overall FastConnect connection and whether you need to take action. At
this point, the FC status will be ACTION REQUIRED (see the next task).
l Cross-connect group (CCG) icon: The icon near the middle of the page. It shows the
status of the cross-connect group itself. At this point, the CCG status will be PENDING
PROVISIONING.
l Cross-connect (CC) icon: The icon on the right side of the page. It shows the status
of a given cross-connect. At this point, the CC status will be PENDING CUSTOMER.
Later, when you add a virtual circuit to your provisioned cross-connect group, under the CCG
icon there will be a VC icon that shows the status of the virtual circuit.
In the Console, you can see the light levels that Oracle detects by viewing the details of the
cross-connect and clicking Light Levels, as shown in the following screenshot:
In the Console, you can see the status of Oracle's side of the interfaces (up or down) by
viewing the details of the cross-connect and clicking Light Levels.
Instructions:
If you have other cross-connects in this group that are ready to use, wait for the first one to
be provisioned, and then activate the next. Only one cross-connect in a group can be
activated and provisioned at a time.
l FastConnect (FC) icon: The FC status remains as ACTION REQUIRED to indicate that
you have another action to take (see the next task).
l Cross-connect group (CCG) icon: The CCG status switches to PROVISIONED to
indicate that the cross-connect group is ready to use.
l Cross-connect (CC) icon: The CC status switches to PROVISIONING and then changes
to PROVISIONED (typically within one minute).
Instructions:
1. In the Console, return to the connection you created earlier, and click the Virtual
Circuits tab on the left side of the page.
2. Click Create Virtual Circuit.
In the resulting dialog box, you can add one or more virtual circuits to run on the cross-
connect group.
3. Enter the following for your virtual circuit:
l Name: A friendly name that helps you keep track of your virtual circuits. The
value does not need to be unique across your virtual circuits, and you can change
it later. Avoid entering confidential information.
l Create in Compartment: Select the compartment where you want to create the
virtual circuit. If you're not sure, select the current compartment (where the DRG
resides). This choice of compartment, in conjunction with a corresponding IAM
policy, controls who has access to the virtual circuit.
4. Choose the virtual circuit type (private or public). A private virtual circuit is for private
peering (where your existing network receives your VCN's private IP addresses). A
public virtual circuit is for public peering (where your existing network receives the
Oracle Cloud Infrastructure regional public IP addresses). Also see Uses for
FastConnect.
l For a private virtual circuit, enter the following:
o Dynamic Routing Gateway Compartment: Select the compartment
where the DRG resides (it should already be selected).
o Dynamic Routing Gateway: Select the DRG to route the FastConnect
traffic to.
o Provisioned Bandwidth: Choose your desired value. If your bandwidth
needs increase later, you can update the virtual circuit to use a different
value (see To edit a virtual circuit).
o Customer BGP ASN: The public or private ASN for your network.
o VLAN: The number of the VLAN to use for this virtual circuit. It must be a
VLAN that is not already assigned to another virtual circuit.
o Customer BGP IP Address: The BGP peering IP address for your edge,
with either a /30 or /31 subnet mask.
o Oracle BGP IP Address: The BGP peering IP address you want to use for
the Oracle edge, with either a /30 or /31 subnet mask.
l For a public virtual circuit, enter the following:
o Provisioned Bandwidth: Choose your desired value. If your bandwidth
needs increase later, you can update the virtual circuit to use a different
value (see To edit a virtual circuit).
o Customer BGP ASN: The public ASN for your network. Note that Oracle
specifies the BGP IP addresses for a public virtual circuit.
o VLAN: The number of the VLAN to use for this virtual circuit. It must be a
VLAN that is not already assigned to another virtual circuit.
o Public IP Prefixes: The public IP prefixes that you want Oracle to receive
over the connection (each one must be /24 or less specific). You can enter a
comma-separated list of prefixes, or one per line.
5. Click Create.
The virtual circuit is created. Its status is now included on the main connection's details.
The status is DOWN if the BGP session isn't yet established, the VLAN isn't configured
correctly, or if there are any other problems. Otherwise the status switches to UP.
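The "/24 or less specific" rule for public prefixes can be sanity-checked locally before you submit them. This is a hedged sketch using made-up example prefixes; it only compares the prefix length, not ownership (which Oracle verifies separately):

```shell
# Check that each public IP prefix is /24 or less specific
# (prefix length <= 24). Example prefixes are illustrative only.
for p in 203.0.113.0/24 198.51.100.0/25; do
  len=${p#*/}   # strip everything up to the "/" to get the prefix length
  if [ "$len" -le 24 ]; then
    echo "$p: acceptable"
  else
    echo "$p: too specific (must be /24 or less specific)"
  fi
done
```

Here 203.0.113.0/24 passes and 198.51.100.0/25 is rejected, because /25 is more specific than /24.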
Also configure the router for redundancy according to the network design you decided on
earlier. After you successfully configure BGP and the VLAN, the virtual circuit's status will
switch to UP.
l FastConnect (FC) icon: The FC status switches to PROVISIONED when the BGP
session is established. For a public virtual circuit, instead of switching to PROVISIONED,
the status may switch to either IP CHECK IN PROGRESS or IP CHECK FAILED (if one of
your public prefixes failed Oracle's verification). When Oracle successfully verifies all
the prefixes, the FC status switches to PROVISIONED.
l Cross-connect group (CCG) icon: The CCG status remains as PROVISIONED.
l Cross-connect (CC) icon: The CC status remains as PROVISIONED.
l Virtual circuit (VC) icon: The virtual circuit's status switches to UP when the BGP
session is established.
1. Make sure that Oracle has successfully verified at least one of the public prefixes you've
submitted. You can see the status of each prefix by viewing the virtual circuit's details in
the Console. When one of the prefixes has been validated, Oracle starts advertising the
regional OCI public addresses over the connection.
2. Launch an instance with a public IP address.
3. Ping the public IP address from a host in your existing private network. You should see
the packet on the FastConnect interface on the virtual circuit. If you do, your
FastConnect public virtual circuit is ready to use. However, remember that only the
public prefixes that Oracle has successfully verified so far are advertised on the
connection.
The following screenshot shows an example of the connection details, after you create the
cross-connect group with a single cross-connect:
Here are reasons for particular status values for the overall connection:
ACTION REQUIRED:
l You need to request cabling at the FastConnect location for the cross-connect group you
just created.
l Or, you need to activate a cross-connect (make sure it's ready to use first).
l Or, you need to set up at least one virtual circuit on your cross-connect group to
complete setup for FastConnect.
DOWN:
In general this means you've created a virtual circuit, but configuration is incomplete or
incorrect:
IP CHECK IN PROGRESS:
For public virtual circuits only. This status means Oracle is in the process of verifying the
public prefixes you've submitted. This status is shown if verification of at least one prefix is
still in progress, and no prefixes have failed verification. You can get the status of each
individual prefix you submitted by viewing the virtual circuit's details.
IP CHECK FAILED:
For public virtual circuits only. This means at least one of the public prefixes you've submitted
failed Oracle's verification. That means Oracle will not advertise that prefix over the virtual
circuit.
The following table summarizes the different states of each component involved in the
connection at different points during setup:
l ACTION REQUIRED (if you haven't yet submitted any prefixes to Oracle)
l IP CHECK IN PROGRESS (if at least one prefix is still being validated)
l IP CHECK FAILED (if at least one prefix failed)
After you activate the cross-connect, the status of the overall connection will be
PROVISIONING until Oracle's system provisions the new cross-connect. Then the status will
switch to PROVISIONED.
l The name
l The bandwidth
l Which DRG it uses (for a private virtual circuit)
l Which VLAN it uses
l The BGP session information
l The public IP prefixes (for a public virtual circuit)
1. In the Console, go to Networking, and then click FastConnect to view your list of
connections.
2. Select the compartment where the connection resides, and then click the connection to
view its details.
3. Click Virtual Circuits, and then click the virtual circuit to view its details.
4. Click Edit and make your changes.
5. Click Save.
To terminate a cross-connect
If you have multiple cross-connects to delete in a cross-connect group, wait until the state of
the first one changes to TERMINATED before deleting the next one. Also, you can't delete a
cross-connect if it's the last provisioned cross-connect in a cross-connect group that's being
used by a provisioned virtual circuit.
1. In the Console, go to Networking, and then click FastConnect to view your list of
connections.
2. Select the compartment where the connection resides, and then click the connection to
view its details.
3. Click the cross-connect you want to delete.
4. Click Delete.
5. Confirm when prompted.
1. In the Console, go to Networking, and then click FastConnect to view your list of
connections.
2. Select the compartment where the connection resides, and then click the connection to
view its details.
3. Click Cross-Connect Groups, and then click the cross-connect group to view its
details.
4. Click Delete.
5. Confirm when prompted.
You can specify your public IP prefixes when you create the virtual circuit. See Task 8: Set up
your virtual circuit(s).
You can add or remove public IP prefixes later after creating the virtual circuit. See To edit a
virtual circuit. If you add a new prefix, Oracle first verifies your company's ownership before
advertising it across the connection. If you remove a prefix, Oracle stops advertising the
prefix within a few minutes of your editing the virtual circuit.
You can view the state of Oracle's verification of a given public prefix by viewing the virtual
circuit's details in the Console. Here are the possible values:
Network Performance
The content in the sections below apply to Category 7 and Section 3.c of the Oracle PaaS
and IaaS Public Cloud Services Pillar documentation.
Oracle Cloud Infrastructure provides a service-level agreement (SLA) for network throughput
between instances in the same availability domain in a virtual cloud network (VCN).
To meet the SLA, the network throughput for instances within the same availability domain
and VCN must be at least 90% of the stated maximum for at least 99.9% of the billing month.
Network throughput is measured in megabits per second (Mbps) or gigabits per second
(Gbps).
For the stated maximum bandwidth by instance shape, see the "Network Bandwidth" column
in the "Shape" tables.
Testing Methodology
Summary: Launch two bare metal instances in the same availability domain and VCN. Install
and run the iperf3 utility, with one instance as server and the other as client. Look at the
iperf3 bandwidth results to determine your VCN's network throughput.
Instructions:
1. Launch two bare metal instances in the same availability domain in a single VCN.
Designate one as the server and the other as the client. For launch instructions, see
Launching an Instance.
2. Install iperf3 on both instances. Example Linux command:
sudo yum install -y iperf3
3. Enable communication to the server instance on TCP port 5201 (for iperf3):
a. For the subnet that the server instance is in, add a rule to the subnet's security list
to allow stateless ingress traffic on TCP port 5201 from any source IP address
(0.0.0.0/0) and any source port. For instructions, see To update an existing
security list.
b. On the instance itself, open the firewall to allow iperf3 traffic. Example Linux
commands:
sudo firewall-cmd --zone=public --permanent --add-port=5201/tcp
sudo firewall-cmd --reload
4. Run iperf3 on both instances:
a. On the server instance, run iperf3 in server mode. Example Linux command:
iperf3 -s
b. On the client instance, run iperf3 in client mode and specify the private IP
address of the server instance. Example Linux command:
iperf3 -c <server_instance_private_ip_address>
5. Look at the iperf3 results on the client instance. The network throughput between the
two instances is shown under "Bandwidth" in the last few lines of the client's iperf3
output, which ends with "iperf Done."
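If you save the client output to a file, the summary lines are easy to pull out with grep. The log content below is a fabricated sample purely for illustration; the bandwidth numbers from your own run will differ:

```shell
# Save a sample iperf3 client log (made-up numbers for illustration only)
cat <<'EOF' > /tmp/iperf3-client.log
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  10.4 GBytes  8.93 Gbits/sec                  sender
[  5]   0.00-10.00  sec  10.4 GBytes  8.92 Gbits/sec                  receiver
iperf Done.
EOF

# Extract the sender/receiver summary lines that report throughput
grep -E 'sender|receiver' /tmp/iperf3-client.log
```

The "sender" and "receiver" lines at the end of the run are the ones to compare against your shape's stated maximum bandwidth.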
A: Make sure that the CPU on the instance isn't loaded heavily with other services or
applications. Confirm this with a utility such as top; the load average should be less than
one.
Troubleshooting
These topics cover some common issues you might run into and how to address them:
l Hanging Connection
l Subnet Deletion
Hanging Connection
This topic covers one of the most common issues seen with communications between your
cloud network and on-premises network: a hanging connection, even though you can ping
hosts across the connection.
Symptom: Your virtual cloud network (VCN) is connected to your existing on-premises
network via an IPSec VPN, or Oracle Cloud Infrastructure FastConnect. Hosts on one side of
the connection can ping hosts on the other side, but the connection hangs. For example:
l You can SSH to a host across the connection, but after you log in to the host, the
connection hangs.
l You can start a Virtual Network Computing (VNC) connection, but the session hangs.
l You can start an SFTP download, but the download hangs.
General problem: Path Maximum Transmission Unit Discovery (PMTUD) is probably not
working on one or both sides of the connection. It must be working on both sides of the
connection so that both sides can know if they're trying to send packets that are too large for
the connection and adjust accordingly. For a brief overview of Maximum Transmission Unit
(MTU) and PMTUD, see Overview of MTU and Overview of PMTUD.
1. Make sure your hosts are configured to use PMTUD: If the hosts in your on-
premises network don't use PMTUD (that is, if they don't set the Don't Fragment flag in
the packets), they have no way to discover if they're sending packets that are too large
for the connection. Your instances on the Oracle side of the connection use PMTUD by
default. Do not change that configuration on the instances.
2. Make sure both the VCN security lists and the instance firewalls accept ICMP
type 3 code 4 messages: When PMTUD is in use, the sending hosts receive a special
ICMP message if they send packets that are too large for the connection. Upon receipt
of the message, the host can dynamically update the size of the packets to fit the
connection. However, your instances can't receive these important ICMP messages if
the security lists for the subnet in the VCN and/or the instance firewalls aren't
configured to accept them. To check to see if a host is receiving the messages, see
Finding Where PMTUD Is Broken.
3. Make sure your router honors the Don't Fragment flag: If the router doesn't
honor the flag and thus ignores the use of PMTUD, it sends fragmented packets to the
instances in the VCN, which is bad (see Why Avoid Fragmentation?). The VCN's security
lists are most likely configured in such a way that they recognize only the initial
fragment, and the remaining ones are dropped, causing the connection to hang.
Instead, your router should use PMTUD and honor the Don't Fragment flag to determine
the correct size of unfragmented packets to send through the connection.
The parts of the solution are numbered and called out in red italics in the following diagram. It
shows an example scenario with your on-premises network connected to your VCN over an
IPSec VPN.
Keep reading for a brief overview of MTU and PMTUD, and how to check if PMTUD is working
on both sides of the network connection.
You may be wondering why you want to avoid fragmentation. First, it adversely affects the
performance of your application. Fragmentation requires reassembly of the fragments and
retransmission if fragments are lost. Reassembly and retransmission require time and CPU
resources.
Second, only the first fragment contains the source and destination port information. This
means the other packets will probably be dropped by firewalls or your VCN's security lists,
which are typically configured to evaluate the port information. For fragmentation to work
with your firewalls and security lists, you would have to configure them to be more
permissive than usual, which is not desirable.
Overview of MTU
The communications between any two hosts across an Internet Protocol (IP) network use
packets. Each packet has a source and destination IP address and a payload of data. Every
network segment between the two hosts has a Maximum Transmission Unit (MTU) that
represents the number of bytes that a single packet can carry.
Across the internet, the MTU is 1500 bytes. This is also true for most home networks and
many corporate networks (and their Wi-Fi networks). Some data centers, including those for
Oracle Cloud Infrastructure, have a larger MTU. The Compute instances use an MTU of 9000
by default. On a Linux host, you can use the ifconfig command to display the MTU of the
host's network connection. For example, here's the ifconfig output from an Ubuntu instance
(the MTU is highlighted in red italics):
ifconfig
ens3 Link encap:Ethernet HWaddr 00:00:17:01:17:83
inet addr:10.0.6.9 Bcast:10.0.6.31 Mask:255.255.255.224
inet6 addr: fe80::200:17ff:fe01:1783/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1
For comparison, here's the output from a machine connected to a corporate network:
ifconfig
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST>
mtu 1500
If the host is connected through a corporate VPN, the MTU is even smaller, because the VPN
tunnel must encapsulate the traffic inside an IPSec packet and send it across the local
network. For example:
ifconfig
utun0: flags=81d1<UP,POINTOPOINT,RUNNING,NOARP,PROMISC,MULTICAST>
mtu 1300
How do the two hosts figure out how large a packet they can send to each other? For many
types of network traffic, such as HTTP, SSH, and FTP, the hosts use the Transmission Control
Protocol (TCP) to establish new connections. During the initial three-way handshake between
two hosts, they each send the Maximum Segment Size (MSS) for how large their payload can
be. This is smaller than the MTU. (TCP runs inside the Internet Protocol (IP), which is why it's
referred to as TCP/IP. Segments are to TCP what packets are to IP.)
Using the tcpdump application, you can see the MSS value shared during the handshake.
Here's an example from tcpdump (with the MSS highlighted in red italics):
12:11:58.846890 IP 129.146.27.25.22 > 10.197.176.19.58824: Flags [S.], seq
2799552952, ack 2580095593, win 26844, options [mss 1260,sackOK,TS val
44858491 ecr 1321638674,nop,wscale 7], length 0
The preceding packet is from an SSH connection to an instance from a laptop connected to a
corporate VPN. The local network the laptop uses for its internet connection has an MTU of
1500 bytes. The VPN tunnel enforces an MTU of 1300 bytes. Then when the SSH connection is
attempted, TCP (running inside the IP connection) tells the Oracle Cloud Infrastructure
instance that it supports TCP segments that are less than or equal to 1260 bytes. With a
corporate VPN connection, the laptop connected to the VPN typically has the smallest MTU and
MSS compared to anything it's communicating with across the internet.
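The MSS values above follow directly from header overhead: for IPv4, the MSS is the path MTU minus 40 bytes (a 20-byte IP header plus a 20-byte TCP header), assuming no extra TCP options:

```shell
# MSS for a TCP/IPv4 connection = path MTU - 40 bytes of headers
# (20-byte IP header + 20-byte TCP header), assuming no TCP options
mtu=1300
mss=$((mtu - 40))
echo "MTU ${mtu} -> MSS ${mss}"
```

This matches the earlier tcpdump example: a VPN tunnel MTU of 1300 bytes yields the advertised "mss 1260".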
A more complex case is when the two hosts have a larger MTU than some network link
between them that is not directly connected to either of them. The following diagram
illustrates an example.
In this example, there are two servers, each directly connected to its own routed network that
supports a 9000-byte MTU. The servers are in different data centers. Each data center is
connected to the internet, which supports a 1500-byte MTU. There is an IPSec VPN tunnel
between the two data centers. That tunnel crosses the internet, so the inside of the tunnel has
a smaller MTU than the internet. In this diagram, the MTU is 1380 bytes.
If the two servers try to communicate (with SSH, for example), during the three-way
handshake, they agree on an MSS around 8960. The initial SSH connection might succeed,
because the maximum packet sizes during the initial SSH connection setup are usually less
than 1380 bytes. When one side tries to send a packet larger than the smallest link between
the two endpoints, Path MTU Discovery (PMTUD) becomes critical.
Overview of PMTUD
Path MTU Discovery is defined in RFC 1191. It works by requiring the two communicating
hosts to set a Don't Fragment flag in the packets they each send. If a packet from one of these
hosts reaches a router where the egress (or outbound) interface has an MTU smaller than the
packet length, the router drops that packet. The router also returns an ICMP type 3 code 4
message to the host. This message specifically says "Destination Unreachable, Fragmentation
Needed and Don't Fragment Was Set" (defined in RFC 792). Effectively the router tells the
host: "You told me not to fragment packets that are too large, and this one's too large. I'm not
sending it." The router also tells the host the maximum size packets allowed through that
egress interface. The sending host then adjusts the size of its outbound packets so they're
smaller than the value the router provided in the message.
Here's an example that shows the results when an instance tries to ping a host (8.8.8.8) over
the internet with an 8000-byte packet and the Don't Fragment flag set (that is, with PMTUD in
use). The returned ICMP message is highlighted in red italics:
ping 8.8.8.8 -M do -s 8000
PING 8.8.8.8 (8.8.8.8) 8000(8028) bytes of data.
From 4.16.139.250 icmp_seq=1 Frag needed and DF set (mtu = 1500)
The response is exactly what's expected. The destination host is across the internet, which
has an MTU of 1500 bytes. Even though the sending host's local network connection has an
MTU of 9000 bytes, the host can't reach the destination host with the 8000-byte packet and
gets an ICMP message accordingly. PMTUD is working correctly.
For comparison, here's the same ping, but the destination host is across an IPSec VPN tunnel:
ping 192.168.6.130 -M do -s 8000
PING 192.168.6.130 (192.168.6.130) 8000(8028) bytes of data.
From 129.146.13.49 icmp_seq=1 Frag needed and DF set (mtu = 1358)
Here the VPN router sees that to send this packet to its destination, the outbound interface is a
VPN tunnel. That tunnel goes across the internet, so the tunnel must fit inside the internet's
1500-byte MTU link. The result is that the inside of the tunnel only allows packets up to 1360
bytes (a value the router then lowered to 1358, which can make things more confusing).
If PMTUD isn't working somewhere along the connection, you need to figure out why and
where. Typically it's because the ICMP type 3 code 4 packet (from the router with the
constrained link that can't fit the packet) never gets back to the sending host. This can happen
if there's something blocking that kind of traffic between the host and the router. And it can
happen on either side of the VPN tunnel (or other constrained MTU link).
To troubleshoot the broken PMTUD, you must determine if PMTUD is working on each side of
the connection. In this scenario, let's assume the connection is an IPSec VPN.
How to ping: Like in Overview of PMTUD, ping a host on the other side of the connection with
a packet that you know is too large to fit through the VPN tunnel (for example, 1500 bytes or
larger). Depending on which operating system the sending host uses, you might need to
format the ping command slightly differently to ensure the Don't Fragment flag is set. For both
Ubuntu and Oracle Linux, you use the -M flag with the ping command.
Here's an example ping (with the -M flag and the resulting ICMP message highlighted in red
italics):
ping -M do -s 1500 192.168.6.130
PING 192.168.6.130 (192.168.6.130) 1500(1528) bytes of data.
From 129.146.13.49 icmp_seq=1 Frag needed and DF set (mtu = 1358)
Make sure to also ping from the other side of the connection to confirm PMTUD is working
from that side. Both sides of the connection must recognize that there is a tunnel between
them that can't fit the large packets.
Bad: If you're testing your side of the connection and the ping succeeds
If you're sending the ping from a host in your on-premises network, and the ping succeeds,
that probably means your edge router is not honoring the Don't Fragment flag. Instead the
router is fragmenting the large packet. The first fragment reaches the destination host, so the
ping succeeds, which is misleading. If you try to do more than just ping, the fragments after
the first get dropped, and the connection will hang.
Make sure to configure your router to honor the Don't Fragment flag. The router's
default configuration is to honor it, but someone might have changed the default.
Bad: If you're testing the VCN side of the connection and you don't see the
ICMP message
When testing from the VCN side of the connection, if you don't see the ICMP message in the
response, there is probably something dropping the ICMP packet before it reaches your
instance. There could be two issues:
l Security list: The Networking security list could be missing an ingress rule that allows
ICMP type 3 code 4 messages to reach the instance. The default security list that comes
with your VCN has this rule. However, it's possible that the instance's subnet is using a
different security list, or someone has removed the required rule from the default
security list. Make sure that the subnet the instance is in has a security list
with an ingress rule that allows ICMP traffic type 3 code 4 from source
0.0.0.0/0 and any source port. It doesn't matter if the rule is stateful or stateless
(the traffic is one-way and treated as stateless regardless). For more information, see
Security Lists, and specifically To update an existing security list.
l Instance firewall: The instance's firewall rules (set in the OS) could be missing a rule
that allows ICMP type 3 code 4 messages to reach the instance. Specifically for a Linux
instance, make sure that iptables or firewalld is configured to allow the ICMP type 3
code 4 messages.
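As one concrete (hypothetical) illustration of the instance-firewall rule, an iptables-save-format fragment that admits ICMP type 3 code 4 might look like the following; firewalld users would instead permit the equivalent ICMP type through their active zone:

```shell
# Fragment in iptables-save format (for example, /etc/sysconfig/iptables
# on a RHEL-family instance; path and placement are assumptions).
# "fragmentation-needed" is iptables' name for ICMP type 3, code 4.
-A INPUT -p icmp --icmp-type fragmentation-needed -j ACCEPT
```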
Oracle recommends using PMTUD. However, in some situations it's possible to configure
servers so they don't need to rely on it. Consider the case of the instances in your VCN
communicating across an IPSec VPN to hosts in your on-premises network. You know the
range of IP addresses for your on-premises network. You can add a special route to your
instances that specifies the maximum MTU to use when communicating with hosts in that
address range. The instance-to-instance communication within the VCN still uses an MTU of
9000 bytes.
The following information shows how to set that route on a Linux instance.
The default route table on the instance typically has two routes: the default route (for the
default gateway), and a local route (for the local subnet). For example:
ip route show
default via 10.0.6.1 dev ens3
10.0.6.0/27 dev ens3 proto kernel scope link src 10.0.6.9
You can add another route that points to the same default gateway, but with the address range
of the on-premises network and a smaller MTU. For example, in the following command, the
on-premises network is 1.0.0.0/8, the default gateway is 10.0.6.1, and the maximum MTU
size is 1300 for packets being sent to the on-premises network.
ip route add 1.0.0.0/8 via 10.0.6.1 mtu 1300
Within the VCN, the instance-to-instance communication continues to use 9000 MTU.
However, communication to the on-premises network uses a maximum of 1300. This example
assumes there's no part of the connection between the on-premises network and VCN that
uses an MTU smaller than 1300.
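Note that the ip route add command above does not survive a reboot. On RHEL-family systems (including Oracle Linux), one common way to persist it is an interface route file; the interface name, gateway, and network below are assumptions matching the earlier example:

```shell
# /etc/sysconfig/network-scripts/route-ens3 (assumed interface name)
# Each line is handed to "ip route add" when the interface comes up:
1.0.0.0/8 via 10.0.6.1 mtu 1300
```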
Subnet Deletion
A subnet must be empty in order to delete it. In other words, it must contain no instances,
load balancers, or DB systems. You might not be able to see all the resources in a subnet. This
is because subnets can contain resources in multiple compartments, and you might not have
access to all the compartments. For example, the subnet might contain instances that your
team manages but also DB systems that another team manages.
Before you can delete a subnet, you must delete all the instances, load balancers, and DB
systems in the subnet. You might need to contact your tenancy administrator to help you
determine who owns the resources in the subnet.
If the subnet is empty when you try to delete it, its state changes to TERMINATING briefly and
then to TERMINATED. If it's not empty, you instead get an error indicating that there are still
resources that you must delete first.
With Object Storage, you can safely and securely store or retrieve data directly from the
internet or from within the cloud platform. Object Storage offers multiple management
interfaces that let you easily manage storage at scale. The elasticity of the platform lets you
start small and scale seamlessly, without experiencing any degradation in performance or
service reliability.
Object Storage is a regional service and is not tied to any specific compute instance. You can
access data from anywhere inside or outside Oracle Cloud Infrastructure, as long as you have
internet connectivity and can access one of the Object Storage endpoints.
Authorization and resource limits are discussed later in this topic.
The following list summarizes some of the ways that you can use Object Storage.
BIG DATA
You can use Object Storage as the primary data repository for big data. Object Storage
offers a scalable storage platform that lets you store large data sets and operate
seamlessly on those data sets. The HDFS connector provides connectivity to various big
data analytic engines like Apache Spark and MapReduce. This connectivity enables the
analytics engines to work directly with data stored in Object Storage. For more
information, see Hadoop Support.
BACKUP/ARCHIVE
You can use Object Storage to preserve backup and archive data that must be stored for
an extended duration to adhere to various compliance mandates.
CONTENT REPOSITORY
You can use Object Storage as your primary content repository for data, images, logs, and
video. You can reliably store and preserve this data for a long time, as well as serve this
content directly from Object Storage. The storage scales as your data storage needs
scale.
LOG DATA
You can use Object Storage to preserve application log data so that you can retroactively
analyze this data to determine usage patterns or debug issues.
APPLICATION DATA
You can use Object Storage to store generated application data that needs to be preserved
for future use. Pharmaceutical trials data, genome data, and Internet of Things (IoT) data
are examples of generated application data that you can preserve using Object Storage.
OBJECT
Any type of data, regardless of content type, is stored as an object. The object is
composed of the object itself and metadata about the object. Each object is stored in a
bucket.
BUCKET
A logical container for storing objects. Users or systems create buckets as needed. A
bucket is associated with a single compartment that has policies that determine what
actions a user can perform on a bucket and on all the objects in the bucket.
NAMESPACE
A logical entity that serves as a top-level container for all buckets and objects, allowing
you to control bucket naming within your tenancy. Each tenancy is provided one unique
and uneditable namespace that is global, spanning all compartments and regions. Bucket
names must be unique within a namespace, but can be repeated across different
namespaces. Within a namespace, buckets and objects exist in a flat hierarchy, but you can
simulate a directory structure to help navigate a large set of objects (for example,
guitars/fender/stratocaster.jpg, guitars/gibson/lespaul.jpg).
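For example, a simulated directory can be listed one level at a time with the CLI's prefix and delimiter options (the namespace and bucket names are placeholders; a sketch, not a complete option list):

```shell
# Sketch: list only the objects under the simulated guitars/fender/ directory.
oci os object list -ns <namespace> -bn <bucket_name> --prefix guitars/fender/ --delimiter /
```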
COMPARTMENT
A collection of related resources that can be accessed only by those who are explicitly
granted access permission by an administrator. Compartments help you organize
resources and make it easier to control the access to those resources. Object Storage
automatically creates a root compartment when a tenancy is provisioned. An
administrator can then create additional compartments within the root compartment and
add access rules for those compartments. A bucket can only exist in one compartment.
STRONG CONSISTENCY
When a read request is made, Object Storage always serves the most recent copy of the
data that was written to the system.
DURABILITY
Object Storage is a regional service and is available across all the availability domains
within a region. Data is stored redundantly across multiple storage servers and across
multiple availability domains. Object Storage actively monitors data integrity using
checksums and automatically detects and repairs corrupt data. Object Storage actively
monitors and ensures data redundancy. If a redundancy loss is detected, Object Storage
automatically creates additional data copies.
CUSTOM METADATA
You can define your own extensive metadata as key-value pairs for any purpose. For
example, you can create descriptive tags for objects, retrieve those tags, and sort
through the data.
ENCRYPTION
Object Storage encrypts all data at rest by default. Encryption can't be turned on or off.
Object Storage offers two storage tiers:
l Use the Standard Object Storage tier for data to which you need fast, immediate,
and frequent access. Data accessibility and performance justify a higher price point to
store data in the Standard tier.
l Use the Archive Storage tier for data that you seldom access, but that
must be retained and preserved for long periods of time. The cost efficiency of the
Archive Storage tier offsets the long lead time required to access the data. For more
information, see Archive Storage.
You interact with the data stored in either tier using the same bucket and object resources, as
well as the same management interfaces.
l For instructions on how to create a bucket and store an object in the bucket, see
"Putting Data into Object Storage" in the Oracle Cloud Infrastructure Getting Started
Guide.
l For task documentation related to buckets, see Managing Buckets.
l For task documentation related to objects, see Managing Objects.
An administrator in your organization needs to set up groups, compartments, and policies that
control which users can access which services, which resources, and the type of access. For
example, the policies control who can create new users, create and manage the cloud
network, launch instances, create buckets, download objects, etc. For more information, see
Getting Started with Policies. For specific details about writing policies for each of the
different services, see Policy Reference.
If you’re a regular user (not an administrator) who needs to use the Oracle Cloud
Infrastructure resources that your company owns, contact your administrator to set up a user
ID for you. The administrator can confirm which compartment or compartments you should be
using.
For administrators: The policy in Let Object Storage Admins Manage Buckets and Objects lets
the specified group do everything with buckets and objects.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for Object Storage, see Details for Object Storage.
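As a sketch of the statement syntax involved (the group name ObjectAdmins is illustrative), a policy of this kind is written as:

```text
Allow group ObjectAdmins to manage buckets in tenancy
Allow group ObjectAdmins to manage objects in tenancy
```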
Pre-Authenticated Requests
Pre-authenticated requests provide a way to let users access a bucket or an object without
having their own credentials, as long as the request creator has permission to access those
objects. When you create a pre-authenticated request, a unique URL is generated. Users in
your organization, partners, or third parties can use this URL to access the targets identified
in the pre-authenticated request. For example, you can create a request that lets an
operations support user upload backups to a bucket without owning API keys. Or, you can
create a request that lets a business partner update shared data in a bucket without owning
API keys.
Permissions
Users who create or manage pre-authenticated requests need the PAR_MANAGE permission
for the target bucket or object.
The user creating the pre-authenticated request must also have permission to perform the
action the pre-authenticated request is permitting. For example, the user creating a pre-
authenticated request for uploading an object must have both the PAR_MANAGE and the
OBJECT_CREATE permissions in the target compartment.
Options
l You can configure the name of the bucket that a user has write access to and can upload
an object or objects to.
l You can configure the name of an object that a user can read from, write to, or read
from and write to.
l You can configure the expiration date for the request.
l There is no hard limit on the number of pre-authenticated requests that you can create.
l You can't update a pre-authenticated request. If you want to change user access options
in response to changing requirements, you have to create a new pre-authenticated
request.
l A pre-authenticated request's target and actions are based on its creator's permissions.
The request is not, however, bound to the creator's account login credentials. A pre-
authenticated request is not affected when the creator's login credentials change.
You can create, delete, or list pre-authenticated requests using the Console, using the CLI, or
by using an SDK to access the API.
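For instance, a pre-authenticated request permitting the upload of a single object might be created with the CLI as sketched below (all angle-bracket values and the expiration timestamp are placeholders):

```shell
# Sketch: create a pre-authenticated request that allows writing one object.
# The response includes the unique URL to hand to the user.
oci os preauth-request create -ns <namespace> -bn <bucket_name> \
  --name <request_name> \
  --access-type ObjectWrite \
  --object-name <object_name> \
  --time-expires "2018-12-31T00:00:00+00:00"
```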
Public Buckets
By default, access to a bucket and its contents requires authentication and authorization.
However, Object Storage supports anonymous, unauthenticated access to a bucket. You make
a bucket public by enabling read access to the bucket.
Permissions
Options
l You can configure the access to allow listing and downloading bucket objects. List and
download access is the default.
l You can configure the access to allow downloading bucket objects only. Users would not
be able to list bucket contents.
l Changing the type of access is bi-directional. You can change a bucket's access from
public to private or from private to public.
l Changing the type of access doesn't affect existing pre-authenticated requests. Existing
pre-authenticated requests still work.
You can enable anonymous public access for new or existing buckets using the Console, CLI,
or an SDK to access the API.
After a request is created, the Pre-Authenticated Request Details page displays information
about the request, including the unique URL generated for it.
To create a public bucket that allows listing and downloading bucket objects
oci os bucket create -ns <namespace> --name <bucket_name> --public-access-type ObjectRead -c <compartment_ocid>
l CreatePreauthenticatedRequest
l DeletePreauthenticatedRequest
l GetPreauthenticatedRequest
l ListPreauthenticatedRequests
Managing Buckets
In the Oracle Cloud Infrastructure Object Storage service, a bucket is a container for storing
objects in a compartment within a namespace. A bucket is associated with a single
compartment. The compartment has policies that indicate what actions a user can perform on
a bucket and all the objects in the bucket.
For administrators: The policy Let Object Storage Admins Manage Buckets and Objects lets
the specified group do everything with buckets and objects.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for Object Storage, see Details for Object Storage.
Tagging Resources
You can apply tags to your resources to help you organize them according to your business
needs. You can apply tags at the time you create a resource, or you can update the resource
later with the desired tags. For general information about applying tags, see Resource Tags.
Pre-Authenticated Requests
Pre-authenticated requests provide a way to let users access a bucket or an object without
having their own credentials, as long as the request creator has permissions to access those
objects. For example, you can create a request that lets a user upload backups to a bucket
without owning API keys. For more information, see Pre-Authenticated Requests.
Public Buckets
When you create a bucket, the bucket is considered a private bucket and the access to the
bucket and its contents requires authentication. However, Object Storage supports
anonymous, unauthenticated access to a bucket. For more information, see Public Buckets.
l Bucket names can contain from 1 to 256 characters. Avoid using confidential
information.
l Valid characters are letters (upper or lower case), numbers, dashes, underscores, and
periods.
l Bucket names must be unique within the namespace.
l Bucket names are prefixed with the namespace:
n/<namespace>/b/<bucket>
For example:
n/company_ABC/b/event-photos
To create a bucket
1. Open the Console, click Storage, and then click Object Storage.
A list of the buckets in the compartment you're viewing is displayed. If you don’t see the
one you're looking for, make sure you’re viewing the correct compartment (select from
the list on the left side of the page).
2. Choose the compartment you want to create a bucket in.
A list of buckets that have already been created is displayed.
3. Click Create Bucket.
4. In the Create Bucket dialog, specify the attributes of the bucket:
l Name: Required. A user-friendly name or description. Avoid using confidential
information.
l Storage Tier: Select the tier in which you want to store your data. Available tiers
include:
o Standard is the default primary Object Storage tier for storing data to
which you need fast, immediate, and frequent access.
o Archive is a special tier for storing data that is accessed infrequently and
requires long retention periods. For more information, see "Archive
Storage" in the Oracle Cloud Infrastructure User Guide.
l Tags: Optionally, you can apply tags. If you have permissions to create a
resource, you also have permissions to apply free-form tags to that resource. To
apply a defined tag, you must have permissions to use the tag namespace. For
more information about tagging, see Resource Tags. If you are not sure if you
should apply tags, skip this option (you can apply tags later) or ask your
administrator.
5. Click Create Bucket.
The bucket is created immediately and you can add objects to it.
To delete a bucket
You can permanently delete an empty bucket; the bucket cannot contain any objects. If you
must delete existing objects before deleting the bucket, see To delete an object from a bucket.
1. Open the Console, click Storage, and then click Object Storage.
A list of the buckets in the compartment you're viewing is displayed. If you don’t see the
one you're looking for, make sure you’re viewing the correct compartment (select from
the list on the left side of the page).
2. Find the bucket that you want to delete.
3. Click the Actions icon ( ), and then click Delete.
4. Confirm when prompted.
By default, a bucket is created in the Standard tier. You can, but do not need to, set --
storage-tier explicitly. For example:
oci os bucket create -ns <your_namespace> --name <bucket_name> --compartment-id <target_compartment_id>
A Standard tier bucket is created immediately and you can add objects to it.
To create an Archive tier bucket, you must explicitly set --storage-tier. For example:
oci os bucket create -ns <your_namespace> --name <archivebucket_name> --compartment-id <target_compartment_id> --storage-tier Archive
An Archive bucket is created immediately and you can add objects to it.
To delete a bucket
You can permanently delete an empty bucket. The bucket cannot contain any objects. For
more information, see To delete an object from a bucket.
Open a command prompt and run oci os bucket delete to delete a bucket. For example:
oci os bucket delete -ns <your_namespace> --name <bucket_name>
l CreateBucket
l DeleteBucket
l GetBucket
l HeadBucket
l ListBuckets
l UpdateBucket
Managing Objects
In the Oracle Cloud Infrastructure Object Storage service, an object is a file or unstructured
data you upload to a bucket within a compartment within a namespace. The object can be any
type of data, for example, multimedia files, data backups, static web content, or logs. You can
store objects up to 10 TiB in size. Objects are processed as a single entity. You can't edit or
append data to an object, but you can replace the entire object. This topic describes how to
manage objects.
For administrators: The policy Let Object Storage Admins Manage Buckets and Objects lets
the specified group do everything with buckets and objects. Objects always reside in the same
compartment as the bucket.
If you need to write a more restrictive policy for objects, the inspect objects verb lets you
list all the objects in a bucket and do a HEAD operation for a particular object. In comparison,
read objects lets you download the object itself.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for buckets and objects, see Details for Object Storage.
Pre-Authenticated Requests
Pre-authenticated requests provide a way to let users access a bucket or object without
having their own credentials. For example, you can create a request that lets a user upload
backups to a bucket without owning API keys. Pre-authenticated requests have the following
scope and constraints:
For more information, see Pre-Authenticated Requests in Managing Access to Buckets and
Objects.
/n/<namespace>/b/<bucket>/o/<object_name>
For example:
/n/Company_ABC/b/event_photos/o/marathon/5k/participant193.jpg
The object name is everything after the /o/. Within a namespace, buckets and objects
exist in a flat hierarchy, but you can simulate a directory structure using the slash /
delimiter to add hierarchy to an object name. Doing so lets you list one directory at a
time, which is helpful when navigating a large set of objects.
l Object names must be Unicode characters for which the UTF-8 encoding does not
exceed 1024 bytes. Clients are responsible for URL-encoding characters.
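Because the limit applies to the UTF-8 encoding rather than the character count, a client can check a candidate name locally before uploading; a minimal plain-shell sketch:

```shell
# Check that an object name's UTF-8 encoding does not exceed 1024 bytes.
name="marathon/5k/participant193.jpg"
bytes=$(printf '%s' "$name" | wc -c)
if [ "$bytes" -le 1024 ]; then
  echo "name is valid (${bytes} bytes)"
else
  echo "name is too long (${bytes} bytes)"
fi
```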
2. Choose the compartment that contains the bucket that contains your object.
A list of buckets is displayed.
3. Click the bucket name that contains your object.
A list of objects in the bucket is displayed.
4. For the object you want to download, click the Actions icon ( ), and then click
Download.
To rename an object
1. Open the Console, click Storage, and then click Object Storage.
2. Choose the compartment that contains the bucket that contains your object.
A list of buckets is displayed.
3. Click the bucket name that contains your object.
A list of objects in the bucket is displayed.
4. For the object you want to edit, click the Actions icon ( ), and then click Edit.
5. In the Edit Object dialog, provide the new name for the object and an optional
delimited directory structure prefix. For example, participant194.jpg or
/marathon/5k/participant194.jpg.
Depending on the size of the object, it can take 4 or more hours to restore an object from
Archive Storage. You cannot download an item until the item is fully restored. Once restored,
you have 24 hours to download the object. After 24 hours, the object is once again archived.
1. Open the Console, click Storage, and then click Object Storage.
2. Choose the compartment that contains the bucket that contains your object.
A list of buckets is displayed.
3. Find the bucket that contains the object you want to delete and click the bucket name.
A list of objects in the bucket is displayed.
4. For the object you want to delete, click the Actions icon ( ), and then click Delete.
5. Confirm when prompted.
An object can be uploaded as a single part or as multiple parts. Here we describe a single-part
upload. For information on multipart uploads, see Managing Multipart Uploads.
For example:
oci os object put -ns <your_namespace> -bn <bucket_name> --name <file_name> --file <file_location> --no-multipart
For example:
oci os object get -ns <your_namespace> -bn <bucket_name> --name <file_name> --file <file_location>
To rename an object
Open a command prompt and run oci os object rename to rename an object.
For example:
oci os object rename -ns <your_namespace> -bn <bucket_name> --name <original_file_name> --new-name <new_file_name>
You can specify the file's directory location with the command's "--name" and "--new-name"
flags, though you are not required to do so. For example:
oci os object rename -ns company_abc -bn photo_collection --name /marathon/5k/participant193.jpg --new-name /marathon/5k/participant194.jpg
For example:
oci os object restore -ns <your_namespace> -bn <archive_bucket_name> --name <archived_file_name>
Use the oci os object head command to check the status of restoring an object from Archive
Storage. For example:
oci os object head -ns <your_namespace> -bn <archive_bucket_name> --name <archived_file_name>
Check the archival-state field. When the archival-state displays Restored, the file is
restored and you have 24 hours to download it.
To delete an object
You can permanently delete an object. Open a command prompt and run oci os object
delete to delete an object. For example:
oci os object delete -ns <your_namespace> -bn <bucket_name> --name <file_name>
l DeleteObject
l GetObject
l HeadObject
l ListObjects
l PutObject
l RenameObject
l RestoreObjects
Managing Multipart Uploads
Multipart uploads let you upload a large object in smaller parts, which can improve
performance when uploading. Multipart uploads can also minimize the impact of network
failures by letting you retry a failed part upload instead of requiring you to retry an entire
object upload.
Multipart uploads also accommodate objects that are too large for a single upload operation.
Oracle recommends you perform a multipart upload to upload objects larger than 100 MiB.
The maximum size for an uploaded object is 10 TiB. Object parts must be no larger than 50
GiB. For very large uploads, a multipart upload also offers you the flexibility of pausing and
resuming at your own pace.
1. Initiating an upload
2. Uploading object parts
3. Committing the upload
Prior to beginning a multipart upload, you are responsible for creating the parts to upload.
Object Storage provides API operations for the remaining steps. The service also provides API
operations for listing in-progress multipart uploads, listing the object parts in an in-progress
multipart upload, and aborting in-progress multipart uploads.
You can use multipart upload API calls or the Java Software Development Kit (SDK) to
manage multipart uploads, but not the Console. This topic describes the steps at a high level,
but you can refer to the API Reference for specifics about supported API calls.
Initiating an Upload
After you finish creating object parts, initiate a multipart upload by making a
CreateMultipartUpload REST API call. Provide the object name and any object metadata.
Object Storage responds with a unique upload ID that you must include in any requests
related to this multipart upload. Object Storage also marks the upload as active. The upload
remains active until you explicitly commit it or abort it.
Object Storage returns an ETag value for each part uploaded. You need both the part number
and corresponding ETag value for each part when you commit the upload.
In the event of network issues, you can restart a failed upload for an individual part. You do
not need to restart the entire upload. If, for some reason, you cannot perform an upload all at
once, multipart upload lets you continue uploading parts at your own pace. While a multipart
upload is still active, you can keep adding parts as long as the total number is less than
10,000.
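A client staying under the 10,000-part limit has to pick a part size based on the object size. A small arithmetic sketch in plain shell (the 2 TiB object size and 128 MiB starting part size are illustrative):

```shell
# Choose a part size so a 2 TiB object needs fewer than 10,000 parts.
object_mib=$((2 * 1024 * 1024))   # object size: 2 TiB expressed in MiB
part_mib=128                      # candidate part size in MiB
parts=$(( (object_mib + part_mib - 1) / part_mib ))
while [ "$parts" -ge 10000 ]; do
  part_mib=$((part_mib * 2))
  parts=$(( (object_mib + part_mib - 1) / part_mib ))
done
echo "part size: ${part_mib} MiB, parts: ${parts}"
```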
You can keep track of an active multipart upload by listing all parts that have been uploaded.
(You cannot list information for an individual object part in an active multipart upload.) The
ListMultipartUploadParts operation requires the namespace, bucket name, and upload ID.
Object Storage will respond with information about the parts associated with the specified
upload ID. Parts information includes the part number, ETag value, MD5 hash, and part size
(in bytes).
Similarly, you can see what uploads are in-progress if you have multiple multipart uploads
occurring simultaneously. Make a ListMultipartUploads API call to list active multipart
uploads in the specified namespace and bucket.
Charges for parts storage begin accruing as soon as you upload data.
You cannot list or retrieve parts from a completed upload. You cannot append or remove parts
from the completed upload either. If you want, you can replace the object by initiating a new
upload.
If you decide to abort a multipart upload instead of committing it, wait for in-progress part
uploads to complete and then use the AbortMultipartUpload operation. If you abort an upload
while part uploads are still in progress, Object Storage cleans up both completed and
in-progress parts. Upload IDs from aborted multipart uploads cannot be reused.
For administrators: The policy in Let Object Storage Admins Manage Buckets and Objects lets
the specified group do everything with buckets and objects.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for Object Storage, see Details for Object Storage.
l AbortMultipartUpload
l CommitMultipartUpload
l CreateMultipartUpload
l ListMultipartUploadParts
l ListMultipartUploads
l UploadPart
In addition to the native Object Storage APIs, Object Storage provides API support for both
the Amazon S3 Compatibility API and the Swift API. However, these APIs do not understand the Oracle
Cloud Infrastructure concept of a compartment. By default, buckets created using the Amazon
S3 Compatibility API or the Swift API are created in the root compartment of the Oracle Cloud
Infrastructure tenancy. Instead, you can designate a different compartment for the Amazon
S3 Compatibility API or Swift API to create buckets in.
When you designate a different compartment to use for the Amazon S3 Compatibility API or
Swift API, any new buckets you create using the Amazon S3 Compatibility API or the Swift API
are created in this newly designated compartment. Buckets previously created in a different
compartment are not automatically moved to the newly designated compartment. See
Managing Buckets if you want to move previously created buckets to this newly designated
compartment.
Compartments have policies that indicate what actions a user can perform on a bucket and all
the objects in the bucket.
For administrators:
l To change the default compartments for Amazon S3 Compatibility API and Swift API, a
user must belong to a group with NAMESPACE_UPDATE permissions.
l To see the current default compartments for Amazon S3 Compatibility API and Swift
API, a user must belong to a group with NAMESPACE_READ permissions.
l To move a bucket to a different compartment, a user must belong to a group with
BUCKET_UPDATE and BUCKET_CREATE permissions in the source compartment, and
BUCKET_CREATE permissions in the target compartment.
If you're new to policies, see Getting Started with Policies and Common Policies. If you want
to dig deeper into writing policies for buckets and objects, see Details for Object Storage.
Your default compartment designations for the APIs are listed under Object Storage
Settings.
For example:
oci os ns get-metadata --namespace <your_namespace>
For example:
oci os ns update-metadata --namespace <your_namespace> --default-s3-compartment-id <your_oci_
compartment_id>
For example:
oci os ns update-metadata --namespace <your_namespace> --default-swift-compartment-id <your_oci_
compartment_id>
Use the following operation to get your default Amazon S3 Compatibility API and Swift API
compartment designations, and change those compartment designations:
l GetNamespaceMetadata
l UpdateNamespaceMetadata
The following highlights the differences between the two storage technologies:
l Compartments
Amazon S3 doesn't use compartments. By default, any buckets created using the Amazon
S3 Compatibility API are created in the root compartment of the Oracle Cloud
Infrastructure tenancy.
l Global bucket namespace
Object Storage doesn't use a global bucket namespace. Bucket names must be unique
within the context of a namespace, but bucket names can be repeated across
namespaces. Each tenant is associated with one default namespace that spans all
compartments.
l Encryption
The Oracle Cloud Infrastructure Object Storage service encrypts all data at rest by
default. Encryption can't be turned on or off using the API.
l Object Level Access Control Lists (ACLs)
Oracle Cloud Infrastructure does not use ACLs for objects. Instead, IAM policies are
used to manage access to compartments, buckets, and objects.
Bucket APIs
l DeleteBucket
l GetLocation
l HeadBucket
l ListObjects
l PutBucket
Object APIs
l DeleteObject
l GetObject
l HeadObject
l PutObject
Tagging APIs
l DeleteBucketTagging
l GetBucketTagging
l PutBucketTagging
l Create an Oracle Cloud Infrastructure tenant. See Signing Up for Oracle Cloud
Infrastructure.
l Create an Amazon S3 Compatibility API key. An Amazon S3 Compatibility API key
consists of an Access Key/Secret Key pair.
After you've generated the necessary key, you can use the Amazon S3 Compatibility API to
access Object Storage in Oracle Cloud Infrastructure. For more information, see the API
Reference.
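The key pair can be generated with the CLI's customer secret key command, as sketched below (the user OCID and display name are placeholders; the secret key appears only in the creation response, so record it then):

```shell
# Sketch: create an Amazon S3 Compatibility API Access Key/Secret Key pair.
oci iam customer-secret-key create --user-id <user_OCID> --display-name <key_name>
```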
Hadoop Support
Using the HDFS connector, you can run Hadoop or Spark jobs against data in the Oracle Cloud
Infrastructure Object Storage service. The connector has the following features:
For information about downloading, configuring, and using the HDFS connector, see HDFS
Connector for Object Storage.
Archive Storage
The Archive Storage tier is ideal for storing data that is accessed infrequently and requires
long retention periods. The Archive Storage tier is more cost effective than the Standard
Object Storage tier for preserving cold data for:
Unlike the Standard Object Storage tier, Archive Storage data retrieval is not instantaneous.
Buckets
You decide which storage tier is appropriate for your data when you initially create the Object
Storage bucket container for your data. The storage tier is expressed as a property of the
bucket. Once set, however, you cannot change the storage tier property for a bucket:
l An existing Standard tier bucket cannot be downgraded to the Archive Storage tier.
l An Archive Storage tier bucket cannot be upgraded to the Standard Object Storage tier.
In addition to the inability to change the storage tier designation, there are other reasons why
storage tier selection requires careful consideration:
l The minimum retention requirement for Archive Storage is 90 days. If you delete data
from Archive Storage before the minimum retention requirements are met, you are
charged a deletion penalty. The deletion penalty is the prorated cost of storing the data
for the full 90 days.
l While Archive Storage is more cost effective than Standard Object Storage for cold
storage, you need to understand that when you restore objects, you are returning them
to the Standard Object Storage tier and will be billed for that storage service class while
the objects reside there.
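One reading of the prorated deletion penalty can be illustrated with rough arithmetic (the per-GB rate below is hypothetical, used only to show the proration; consult actual pricing):

```shell
# Rough sketch of the 90-day minimum-retention deletion penalty.
# rate_cents is a hypothetical price per GB per 30 days, not a real rate.
size_gb=100
rate_cents=3
days_stored=30                         # data deleted after 30 of the 90 days
remaining_days=$((90 - days_stored))
penalty_cents=$((size_gb * rate_cents * remaining_days / 30))
echo "penalty: ${penalty_cents} cents covers the ${remaining_days} remaining days"
```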
See Managing Buckets for detailed instructions on creating an Archive Storage tier bucket.
Objects
You upload objects to an Archive Storage bucket the same way you upload objects to a
Standard Object Storage bucket. The difference is that when you upload an object to an
Archive Storage bucket, the object is archived and you can no longer download an object
without first restoring it.
Archived objects are displayed in the object listing of a bucket. You can also display the
details of each object.
See Managing Objects for detailed instructions on uploading objects to an Archive Storage
bucket.
To download an object from Archive Storage, you must first restore the object. It takes about
four hours from the time an Archive Storage restore request is made to the time the first
byte of data is retrieved (retrieval time is measured as Time To First Byte (TTFB)).
How long the full restoration takes depends on the size of the object. You can determine the
status of the restoration by looking at the object Details. Once the status shows as
Restored, you can then download the object.
After an object is restored, you have a 24-hour window to download the object. You can find
out how much of the 24-hour download time is remaining by looking at Available for
Download in object Details. After 24 hours, the object returns to Archive Storage. You
always have access to an object's metadata, regardless of whether the object is in an
archived or restored state.
See Managing Objects for detailed instructions on restoring, checking status of, and
downloading Archive Storage objects.
WORM Compliance
You can achieve WORM compliance with Archive Storage by applying IAM policy permissions
so that data, once written, cannot be overwritten.
See Managing Access to Buckets and Objects for more information. If you're new to policies,
see Getting Started with Policies and Common Policies.
The namespace metadata stores the default compartment assignments for the Amazon S3
Compatibility API and the Swift API. For more information, see Viewing and Specifying
Designated Compartments.
Open a command prompt and run the following command to get your namespace:
oci os ns get
CLI Architecture
The CLI is built on Python (version 2.7.5 or 3.5 or later), running on Mac, Windows, or Linux.
The Python code makes calls to Oracle Cloud Infrastructure APIs to provide the functionality
implemented for the various services.
These APIs are typical REST APIs that use HTTPS requests and responses. For more
information, see About the API.
CLI Help
The command line help is derived from the APIs and help text in the Python source code. You
can get help for a specific command on the command line and view all the command line help
as a text file in your browser.
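For example, appending --help (or -h) to any command prints its usage text; this sketch assumes the CLI is already installed:

```shell
# Show command-line help for a specific command.
oci os bucket create --help
```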
l Services supported:
o Audit
o Core Services (Networking, Compute, Block Volume)
o Database
o IAM
o Object Storage
o Load Balancing
l Licensing: This CLI and sample is dual licensed under the Universal Permissive License
1.0 and the Apache License 2.0; third-party content is separately licensed as described
in the code.
REQUIREMENTS
policy, see Adding Users. For a list of other typical Oracle Cloud Infrastructure policies,
see Common Policies.
- A keypair used for signing API requests, with the public key uploaded to Oracle. Only
  the user calling the API should possess the private key. See Configuring the CLI.
- Python version 2.7.5 or 3.5 or later, running on Mac, Windows, or Linux. Note that if you
  use the CLI Installer and do not have Python on your machine, the Installer offers to
  automatically install Python for you. If you already have Python installed on your
  machine, you can use the python --version command to find out which version is
  installed.
INSTALLATION OPTIONS
There are two ways to install the CLI; use the one that is best suited to your environment:
- Automatically installing the CLI and dependencies with the CLI installer
- Manually installing the CLI and dependencies
The installer uses a script to install the CLI and programs that are required. You can use the
installer to eliminate most of the manual steps to install the CLI. The installer script:
- Installs Python
  - During installation, you are prompted to provide a location for installing the
    binaries and executables. If Python is not installed on your computer, or the
    installed version of Python is incompatible with the CLI, you are prompted to
    install Python.
  - The installer doesn't try to install Python on a MacOS computer. However, the
    script notifies you if the version of Python on the computer is too old or
    incompatible.
- Installs virtualenv and creates a virtual environment
- Installs the latest version of the CLI
This section describes how to install the CLI by following the sequence of steps that are
provided.
INSTALLING PYTHON
WINDOWS
Install Python from the Python Windows downloads page. During installation, choose to add
Python to the PATH and/or environment variables (depending on the prompt).
To install Python on this version of Windows Server, you must either install Python 2.7 or
apply Windows 2008 R2 Service Pack 1 (SP1) to the instance before installing a later version
of Python.
ORACLE LINUX
Some versions of Oracle Linux come with incompatible versions of Python, and may require
additional components to install the CLI.
Before you install the CLI, run the following command on a new Oracle Linux 7.3 image.
sudo yum install gcc libffi-devel python-devel openssl-devel
sudo easy_install pip
ORACLE LINUX 6
On Oracle Linux 6, a newer version of Python is usually required. You can install a newer
version alongside the existing version by downloading from Python, and then install the CLI in
a virtual environment that uses the new version. To install the new version of Python, run the
following commands.
sudo yum install gcc libffi-devel python-devel openssl-devel
sudo easy_install pip
curl -O https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tgz
tar -xvzf Python-3.6.0.tgz
cd Python-3.6.0
./configure
make
sudo make install
Before you install the CLI, run the following commands on a new CentOS image.
sudo yum install gcc libffi-devel python-devel openssl-devel
sudo easy_install pip
Python should come installed. However, if you need to install Python, use the OS's normal
package manager.
Running the CLI in a virtual environment is optional. However, Oracle recommends running
the CLI in a virtualenv environment to isolate Python dependencies. If you want to run the
CLI in a virtual environment, use the following installation sequence:
If you don't create a virtualenv environment first, you have to run sudo pip install
oci_cli. The sudo is required to avoid permission errors caused by trying to update
packages in the system Python installation.
virtualenv is a virtual environment builder that lets you create isolated Python
environments. First, you have to download and install virtualenv and then install the CLI in
the virtual environment. (For Linux users, virtualenv is usually in a separate package from
the main Python package.)
3. (Optional) To create a directory for storing your virtual environments, run the following
command.
mkdir -p myvirtualspaces/virtualenvs
4. To create a new virtual environment without any packages, run the following command.
virtualenv myvirtualspaces/virtualenvs/cli-testing --no-site-packages
If you're installing a newer version of Python to run alongside an existing version, you can
create a virtual environment that uses the new version.
To reference the new version of Python, run the following command with the -p parameter.
virtualenv -p /usr/local/bin/python3.6 cli-testing
Use the following steps to install the CLI in a virtual environment or on a standard system.
You can download the CLI from GitHub or install the package from Python Package Index
(PyPI).
Your operating system determines which commands are used to start a CLI session in a
virtual environment.
1. Open a terminal.
2. Change the working directory.
cd myvirtualspaces/virtualenvs/cli-testing/bin
Windows Users
To start a CLI session, run the following commands.
To stop using the CLI, run the following command from the command line.
deactivate
Before using the CLI, you have to create a config file that contains the required credentials for
working with Oracle Cloud Infrastructure. You can create this file using a setup dialog or
manually, using a text editor.
To have the CLI walk you through the first-time setup process, step by step, use the oci
setup config command. The command prompts you for the information required for the
config file and the API public/private keys. The setup dialog generates an API key pair and
creates the config file.
MANUAL S ETUP
If you want to set up the API public/private keys yourself, and write your own config file, see
SDK and Tool Configuration.
The CLI supports using a file for CLI-specific configurations. You can:
The default location and file name is ~/.oci/oci_cli_rc. You can also explicitly specify this
file with the --cli-rc-file option or with the legacy --defaults-file option. For
example:
# Uses the file from ~/.oci/oci_cli_rc
oci os bucket list
The preceding command creates the file you specify that includes examples of default
command aliases, parameter aliases, and named queries.
Specify a default profile in the OCI_CLI_SETTINGS section of the CLI configuration file. The
next example shows how to specify a default profile named IAD. The CLI looks for a profile
named IAD in your ~/.oci/config file, or any other file that you specify using the
--config-file option.
[OCI_CLI_SETTINGS]
default_profile=IAD
You can also specify a default value for the --profile option using the OCI_CLI_PROFILE
environment variable.
If a default profile value has been specified in multiple locations, the order of precedence is:
The CLI supports using a default values file so that you don't have to keep typing values on the command line.
Default values are treated hierarchically, with specific values having a higher order of
precedence than general values. For example, if there is a globally defined value for
compartment-id and a specific compartment-id defined for the compute instance launch
command, the CLI uses the value for the compute instance launch instead of the global
default.
When you start the CLI, the program looks for the default values file in ~/.oci/oci_cli_rc.
You can also specify a different file and location using the --defaults-file option. For
example:
# Uses the default values file from ~/.oci/oci_cli_rc
oci os bucket list
If a value provided on the command line also exists in the --cli-rc-file file, the value from
the command line takes priority. For a command with options that take multiple values, the
values are taken entirely from the command line or entirely from the --cli-rc-file file; the
two sources aren't merged.
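The two lookup rules described above (the most specific key wins, and command-line values replace rather than merge with rc-file values) can be sketched in plain Python. This is an illustrative sketch, not the CLI's actual implementation; the function names and the sample defaults dictionary are invented for the example.

```python
# A rough sketch (not the CLI's actual implementation) of the two rules
# described above: the most specific key in the rc file wins over more
# general ones, and command-line values replace rc-file values outright.
def resolve_default(defaults, command_path, option):
    """Walk from the most specific key down to the global one."""
    parts = command_path.split(".")
    while parts:
        key = ".".join(parts) + "." + option
        if key in defaults:
            return defaults[key]
        parts.pop()
    return defaults.get(option)  # global fallback

def effective_values(cli_values, rc_values):
    """Multi-value options come entirely from one source, never merged."""
    return cli_values if cli_values else rc_values

# Sample rc-file contents, mirroring the bucket-name example below.
defaults = {
    "bucket-name": "my-global-default-bucket-name",
    "os.object.bucket-name": "bucket-name-for-object-commands",
}
```

For example, a lookup for `os object multipart list` falls through to the `os.object.bucket-name` entry, while an unrelated command falls back to the global default.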
The --cli-rc-file file can be divided into different sections with one or more keys per
section.
S ECTIONS
In the next example, the file has two sections, with a key in each section. To specify which
section to use, you use the --profile option in the CLI.
[DEFAULT]
compartment-id = ocid1.compartment.oc1..aaaaaaaal5zx25nzpgeyqd3gzijdlg3ieqeyrggnx7il26astxxhqoljnhwa
[ANOTHER_SECTION]
compartment-id = ocid1.compartment.oc1..aaaaaaaal3gzieyqdrggnx7xil26astxxhqol2pgjjdlieqeyg35nz5znhwa
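The section layout above is plain ini syntax. As an illustration (not how the CLI itself parses the file), Python's stdlib configparser can read the same `[SECTION]` / `key = value` structure; the OCIDs below are shortened samples, not real values.

```python
# Illustration only: reading an oci_cli_rc-style file with the stdlib.
import configparser

rc_text = """
[DEFAULT]
compartment-id = ocid1.compartment.oc1..aaaa
[ANOTHER_SECTION]
compartment-id = ocid1.compartment.oc1..bbbb
"""

parser = configparser.ConfigParser()
parser.read_string(rc_text)

def compartment_for(profile):
    # Values fall back to [DEFAULT] when a section has no explicit key.
    return parser[profile]["compartment-id"]
```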
KEYS
Keys are named after command line options, but do not use a leading double hyphen (--). For
example, the key for --image-id is image-id. You can specify keys for single values,
multiple values, and flags.
- Keys for Single Values. The next example shows how to specify key values at different
  levels, and with different scope.
[DEFAULT]
# Defines a global default for bucket-name
bucket-name = my-global-default-bucket-name
# Defines a default for bucket-name, which applies to all 'os object' commands (e.g., os object get)
os.object.bucket-name = bucket-name-for-object-commands
# Defines a default for bucket-name, for the 'os object multipart list' command
os.object.multipart.list.bucket-name = bucket-name-for-multipart-list
- Keys for Multiple Values. Some options, such as --include and --exclude on the oci
  os object bulk-upload command, can be specified more than once. For example:
oci os object bulk-upload -ns my-namespace -bn my-bucket --src-dir my-directory --include *.txt --include *.png
The next example shows how you would enter the --include values in the --cli-rc-file
file.
[DEFAULT]
os.object.bulk-upload.include =
*.txt
*.png
In the previous example, one value is given for each line and each line must be indented
underneath its key. You can use tabs or spaces and the amount of indentation doesn't
matter. You can also put a value on the same line as the key, add more values on the
following lines, and use a path statement for a value. For example:
[DEFAULT]
os.object.bulk-upload.include = *.pdf
*.txt
*.png
my-subfolder/*.tiff
- Keys for Flags. Some command options are flags, like --force, which take a Boolean
  value. To set a default for the --force flag, use the following entry.
os.object.delete.force=true
# Command examples:
# oci os object ls or oci os compute ls
# This is a command sequence alias that lets you use "oci os object rm" instead of "oci os object delete".
# Command example:
# <alias> = rm, <sequence of groups and sub-groups> = os object, <command or group to alias> = delete
If you want to define default values for options in your CLI configuration file, you can use the
alias names you have defined. For example, if you have ls as an alias for list, you can
define a default compartment when listing instances by using the following entry.
[DEFAULT]
compute.instance.ls.compartment-id=ocid1.compartment.oc1..aaaaaaaal5zx25nzpgeyqd3gzijdlg3ieqeyrggnx7il26astxxhqoljnhwa
Specify option aliases in the OCI_CLI_PARAM_ALIASES section of the CLI configuration file.
Option aliases are applied globally. The following example shows some aliases for command
options.
[OCI_CLI_PARAM_ALIASES]
# Option aliases either start with a double hyphen (--) or are a single hyphen (-) followed by a
# single letter. For example: --foo, -f
#
--ad = --availability-domain
--dn = --display-name
--egress-rules = --egress-security-rules
--ingress-rules = --ingress-security-rules
If you want to define default values for options in your CLI configuration file, you can use the
alias names you have defined. For example, if you have -ad as an alias for --availability-
domain, you can define a default availability domain when listing instances by using the
following entry.
[DEFAULT]
compute.instance.list.ad=xyx:PHX-AD-1
If you use the --query parameter to filter or manipulate output, you can define named
queries instead of using a JMESPath expression on the command line.
Specify named queries in the OCI_CLI_CANNED_QUERIES section of the CLI configuration file.
# Filters where the display name contains some text and pulls out certain attributes (id and time-created)
#
filter_by_display_name_contains_text_and_get_attributes=data[?contains("display-name", `your_text_here`)].{id: id, timeCreated: "time-created"}
get_top_5_results=data[:5]
You can reference any of these queries using this syntax: query://<query name>.
For example, to get id and display name from a list, run the following command.
oci compute instance list -c $C --query query://get_id_and_display_name_from_list
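The named queries above are JMESPath expressions. As a plain Python illustration (without the jmespath library), here is roughly what each computes over the "data" list of a response; the sample data, IDs, and function names are invented for the example.

```python
# Illustrative equivalents of the two named queries above.
def get_top_5_results(data):
    return data[:5]  # data[:5]

def filter_by_display_name_contains(data, text):
    # data[?contains("display-name", `text`)].{id: id, timeCreated: "time-created"}
    return [
        {"id": item["id"], "timeCreated": item["time-created"]}
        for item in data
        if text in item["display-name"]
    ]

# Hypothetical sample response data, for illustration only.
data = [
    {"id": "ocid1.instance.sample-a", "display-name": "web-server-1",
     "time-created": "2017-09-15T00:00:00+00:00"},
    {"id": "ocid1.instance.sample-b", "display-name": "db-server-1",
     "time-created": "2017-09-16T00:00:00+00:00"},
]
```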
Enabling Auto-complete
If you used the CLI installer, you don't have to configure auto-complete because it's enabled
automatically.
To enable auto-complete (tab completion) for a manual CLI installation, run the following
command.
oci setup autocomplete
This section provides information about command syntax, getting help, and managing input or
output.
Most commands must specify a service, followed by a resource type and then an action. The
basic command line syntax is:
oci <service> <type> <action> <options>
The following command to launch an instance shows a typical command line construct.
oci compute instance launch --availability-domain "EMIr:PHX-AD-1" -c ocid1.compartment.oc1..aaaaaaaal3gzijdlieqeyg35nz5zxil26astxxhqol2pgeyqdrggnx7jnhwa --shape "VM.Standard1.1" --display-name "Instance 1 for sandbox" --image-id ocid1.image.oc1.phx.aaaaaaaaqutj4qjxihpl4mboabsa27mrpusygv6gurp47kat5z7vljmq3puq --subnet-id ocid1.subnet.oc1.phx.aaaaaaaaypsr25bzjmjyn6xwgkcrgxd3dbhiha6lodzus3gafscirbhj5bpa
You can get help for any command using --help, -h, or -?. For example:
oci --help
oci os bucket -h
You can view all the command line help as a text file in your browser.
To get the installed version of the CLI, run the following command.
oci --version
- Date Only (this date is taken as midnight UTC of that day)
  Format: YYYY-MM-DD, Example: 2017-09-15
- Epoch seconds
  Example: 1412195400
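The two accepted forms above can be parsed as follows. This is an illustrative sketch, not the CLI's actual parser, and the CLI's own datetime handling supports additional variants.

```python
# Sketch: interpret a date-only string as midnight UTC, and a run of
# digits as epoch seconds, matching the two formats described above.
from datetime import datetime, timezone

def parse_cli_datetime(value):
    if value.isdigit():  # epoch seconds, e.g. 1412195400
        return datetime.fromtimestamp(int(value), tz=timezone.utc)
    # Date only: taken as midnight UTC of that day.
    return datetime.strptime(value, "%Y-%m-%d").replace(tzinfo=timezone.utc)
```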
The CLI provides several options for managing command input and output.
Complex input, such as arrays and objects with more than one value, is passed in JSON
format and can be provided as a string on the command line, as a file, or as a combination
of a command-line string and a file.
The following command, run on a MacOS or Linux machine, shows two values being passed for
the --metadata option.
oci os bucket create -ns mynamespace --name mybucket --metadata '{"key1":"value1","key2":"value2"}' --compartment-id ocid1.compartment.oc1..aaaaaaaarhifmvrvuqtye5q66rck6copzqck3ukc5fldrwpp2jojdcypxfga
On a Windows machine, you can also pass complex input as a JSON string, but you must
escape any strings within the JSON string. In the next example, each double quote inside
the JSON string is escaped with a backslash.
oci os bucket create -ns mynamespace --name mybucket --metadata '{\"key1\":\"value1\",\"key2\":\"value2\"}' --compartment-id ocid1.compartment.oc1..aaaaaaaarhifmvrvuqtye5q66rck6copzqck3ukc5fldrwpp2jojdcypxfga
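Rather than escaping by hand, you can generate the JSON string programmatically and add the Windows escaping in one step. This snippet is an illustration, not part of the CLI itself.

```python
# Build valid JSON, then escape each embedded double quote for use
# inside a Windows shell string.
import json

metadata = {"key1": "value1", "key2": "value2"}
json_str = json.dumps(metadata, separators=(",", ":"))
escaped = json_str.replace('"', '\\"')
```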
JSON Errors
For more information about using JSON strings, see Advanced JSON Options.
By default, all responses to a command are returned in JSON format. For example, the
following response is returned when you issue the command to get a list of regions.
{
"data": [
{
"key": "FRA",
"name": "eu-frankfurt-1"
},
{
"key": "IAD",
"name": "us-ashburn-1"
},
{
"key": "PHX",
"name": "us-phoenix-1"
}
]
}
In some cases, readability can become an issue, which is easily resolved by formatting a
response as a table. To get a response to a command formatted as a table, run the following
command.
oci iam region list --output table
FILTER OUTPUT
You can filter output using the JMESPath query option for JSON. Filtering is very useful when
dealing with large amounts of output. For example, run the following command with the
output table option to get a list of images.
The image information is returned in table format, but too much data is returned, which
overflows the width of the terminal. In addition, you may not need all the information that's
returned.
| base-image-id | compartment-id | create-image-allowed | display-name | id | lifecycle-state | operating-system | operating-system-version | time-created |
+---------------+----------------+----------------------+--------------+----+-----------------+------------------+--------------------------+--------------+
| None | None | True | Windows-Server-2012-R2-Standard-Edition-VM-2017.07.25-0 | ocid1.image.oc1.phx.aaaaaaaab2xgy6bijtudhsgsbgns6zwfqnkdb2bp4l4qap7e4mehv6bv3qca | AVAILABLE | Windows | Server 2012 R2 Standard | 2017-07-25T23:59:59.311000+00:00 |
| None | None | True | Windows-Server-2012-R2-Standard-Edition-VM-2017.04.03-0 | ocid1.image.oc1.phx.aaaaaaaa53cliasgvqmutflwqkafbro2y4ywjebci5szc4eus5byy2e2b7ua | AVAILABLE | Windows | Server 2012 R2 Standard | 2017-04-03T19:42:22.938000+00:00 |
| None | None | True | Windows-Server-2012-R2-Standard-Edition-BM-2017.07.25-0 | ocid1.image.oc1.phx.aaaaaaaadcegaay43eux6uap55fhp6lqaqh37xgocscktwm2yr7ql4pcykxq | AVAILABLE | Windows | Server 2012 R2 Standard | 2017-07-25T20:55:37.937000+00:00 |
| None | None | True | Windows-Server-2012-R2-Standard-Edition-BM-2017.04.13-0 | ocid1.image.oc1.phx.aaaaaaaa7xgecq2kt7tikqfrmshu6gwukoc3lcnf2iqtwmjyarlprp6j6lna | AVAILABLE | Windows | Server 2012 R2 Standard | 2017-04-13T17:36:50.840000+00:00 |
| None | None | True | Windows-Server-2008-R2-Standard-Edition-VM-2017.08.03-0 | ocid1.image.oc1.phx.aaaaaaaaejmyrf52wf2blf7jd7y2dcrjvg6dyulfyp7d3r3oarc5ayka5liq | AVAILABLE | Windows | Server 2008 R2 Standard | 2017-07-27T18:19:06.976000+00:00 |
| None | None | True | Oracle-Linux-7.4-2017.09.29-0 | ocid1.image.oc1.phx.aaaaaaaa3g2xpzlbrrdknqcjtzv2tvxcofjc55vdcmpxdlbohmtt7encpana | AVAILABLE | Oracle Linux | 7.4 | 2017-10-05T22:36:17.246000+00:00 |
| None | None | True | Oracle-Linux-7.4-2017.08.25-1 | ocid
You can limit the amount of data returned by combining the --query option with --output
table to get the information you want from a command.
To get filtered image information returned in a table format, run the following command.
oci compute image list -c ocid1.compartment.oc1..aaaaaaaapxgklgmujxjzx2ypptfjrcieq7rrob2u2zbesh3wlafsgthhqtea --output table --query "data [*].{ImageName:\"display-name\", OCID:id}"
The previous command returns the following image information, formatted as a two column
table.
+---------------------------------------------------------+----------------------------------------------------------------------------------+
| ImageName                                               | OCID                                                                             |
+---------------------------------------------------------+----------------------------------------------------------------------------------+
| Windows-Server-2012-R2-Standard-Edition-VM-2017.07.25-0 | ocid1.image.oc1.phx.aaaaaaaab2xgy6bijtudhsgsbgns6zwfqnkdb2bp4l4qap7e4mehv6bv3qca |
| Windows-Server-2012-R2-Standard-Edition-VM-2017.04.03-0 | ocid1.image.oc1.phx.aaaaaaaa53cliasgvqmutflwqkafbro2y4ywjebci5szc4eus5byy2e2b7ua |
| Windows-Server-2012-R2-Standard-Edition-BM-2017.07.25-0 | ocid1.image.oc1.phx.aaaaaaaadcegaay43eux6uap55fhp6lqaqh37xgocscktwm2yr7ql4pcykxq |
| Windows-Server-2012-R2-Standard-Edition-BM-2017.04.13-0 | ocid1.image.oc1.phx.aaaaaaaa7xgecq2kt7tikqfrmshu6gwukoc3lcnf2iqtwmjyarlprp6j6lna |
| Windows-Server-2008-R2-Standard-Edition-VM-2017.08.03-0 | ocid1.image.oc1.phx.aaaaaaaaejmyrf52wf2blf7jd7y2dcrjvg6dyulfyp7d3r3oarc5ayka5liq |
| Oracle-Linux-7.4-2017.09.29-0                           | ocid1.image.oc1.phx.aaaaaaaa3g2xpzlbrrdknqcjtzv2tvxcofjc55vdcmpxdlbohmtt7encpana |
| Oracle-Linux-7.4-2017.08.25-1                           | ocid1.image.oc1.phx.aaaaaaaajan2cd2g65tphpaiegiz4lbs422rdc73okcu7dt2uya6p5szywsa |
| Oracle-Linux-7.4-2017.08.25-0                           | ocid1.image.oc1.phx.aaaaaaaabifl2bmaygtu4riw3vcuowl5cqwdzdqzwndqneoybcfcn2pgyc6a |
| Oracle-Linux-7.3-2017.07.17-1                           | ocid1.image.oc1.phx.aaaaaaaa7jvfm572d4ehcgh3ijapvhrt52voel33ispumnygi3kl7mph55ha |
| Oracle-Linux-7.3-2017.07.17-0                           | ocid1.image.oc1.phx.aaaaaaaa5yu6pw3riqtuhxzov7fdngi4tsteganmao54nq3pyxu3hxcuzmoa |
| Oracle-Linux-6.9-2017.09.29-0                           | ocid1.image.oc1.phx.aaaaaaaa2d243dmn6mj53zieyap5bdvtq7xfmr5kg5xulrldbjzdavaaoj6a |
| Oracle-Linux-6.9-2017.08.25-0                           | ocid1.image.oc1.phx.aaaaaaaavlwrtcgz2mx6c4q4qg4gwvibx6g7xqkowe3tbbwjnifybwmexpnq |
| Oracle-Linux-6.9-2017.07.17-0                           | ocid1.image.oc1.phx.aaaaaaaa3s4v5eamndtyghbo4bj2mhobkwjwbz3eowyy5cebmrsoxvoopixa |
| CentOS-7-2017.09.14-0                                   | ocid1.image.oc1.phx.aaaaaaaauqtvzhqplzuyesb5tctig6qrwoavpnfiwdkvuynu7z646z72ahcq |
| CentOS-7-2017.07.17-0                                   | ocid1.image.oc1.phx.aaaaaaaahmts5c5nktcnqsu6ppom72d7dnvkmqsoaavpsiklamn7qd3a7szq |
| CentOS-7-2017.04.18-0                                   | ocid1.image.oc1.phx.aaaaaaaaamx6ta37uxltor6n5lxfgd5lkb3lwmoqurlpn2x4dz5ockekiuea |
| CentOS-6.9-2017.09.14-0                                 | ocid1.image.oc1.phx.aaaaaaaagedr7qsbpxjylieetj7dy2r4xoq6p65v3i6y4simkhgyww2ibzxq |
| CentOS-6.9-2017.07.17-0                                 | ocid1.image.oc1.phx.aaaaaaaalm3mr4lpsnzjev2nzmmkhpiy7yxu3456qyg7r4nvjieslp4yngtq |
| CentOS-6.8-2017.06.13-0                                 | ocid1.image.oc1.phx.aaaaaaaauk5k4km4epm7fxj5ifuylvnyjfklmukqcg25clayx3ucuqizjbia |
| Canonical-Ubuntu-16.04-2017.08.22-0                     | ocid1.image.oc1.phx.aaaaaaaalzhdvphf77qgvqo2apmve7o4s4jo77rluaf456qdzrtwmkq2xhra |
| Canonical-Ubuntu-16.04-2017.06.28-0                     | ocid1.image.oc1.phx.aaaaaaaak2idogwetkehtdvo7m673ojuucpfxhybd3ehun7izzgjqi4c4gga |
| Canonical-Ubuntu-16.04-2017.05.16-0                     | ocid1.image.oc1.phx.aaaaaaaae3a3oedsmmwsqu4dsrzntekefgq7vosngn4r6u6n5mis7dwpxxpa |
| Canonical-Ubuntu-14.04-2017.08.22-0                     | ocid1.image.oc1.phx.aaaaaaaa2wjumduuoq6rqprrsmgu53eeyzp47vjztn355tkvsr3m2p57woqq |
| Canonical-Ubuntu-14.04-2017.07.10-0                     | ocid1.image.oc1.phx.aaaaaaaaelnit3ag2zy3u5664shbjqgl6c33g2i436wix6xb5tqcsa7klnoa |
+---------------------------------------------------------+----------------------------------------------------------------------------------+
For more information about the JMESPath query language for JSON, see JMESPath.
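The --query expression used above, data[*].{ImageName:"display-name", OCID:id}, is a JMESPath projection. In plain Python it amounts to the following; the response data here is a made-up sample, not real output.

```python
# Illustrative equivalent of the JMESPath projection used above.
def project_images(response):
    return [
        {"ImageName": img["display-name"], "OCID": img["id"]}
        for img in response["data"]
    ]

# Hypothetical sample response for illustration only.
response = {"data": [
    {"display-name": "Oracle-Linux-7.4-2017.09.29-0",
     "id": "ocid1.image.oc1.phx.samplevalue",
     "operating-system": "Oracle Linux"},
]}
```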
EXAMPLES
For example, you can assign frequently used OCIDs to shell variables so you don't have to
retype them:
T=ocid1.tenancy.oc1..aaaaaaaaba3pv6wm2ytdrwrx32uzr4h25vkcr4jqae5f15p2b2qstifsfdsq
C=ocid1.compartment.oc1..aaaaaaaarhifmvrvuqtye5q66rck6copzqck3ukc5fldrwpp2jojdcypxfga
To list users and limit the output, run the following command.
oci iam user list --compartment-id $T --limit 5
The CLI provides command options that support advanced input and output operations.
You can get the correct JSON format for command options and commands.
[
  {
    "icmpOptions": {
      "code": 0,
      "type": 0
    },
    "isStateless": true,
    "protocol": "string",
    "source": "string",
    "tcpOptions": {
      "destinationPortRange": {
        "max": 0,
        "min": 0
      },
      "sourcePortRange": {
        "max": 0,
        "min": 0
      }
    },
    "udpOptions": {
      "destinationPortRange": {
        "max": 0,
        "min": 0
      },
      "sourcePortRange": {
        "max": 0,
        "min": 0
      }
    }
  }
]
The CLI supports combining arguments on the command line with file input. However, if the
same values are provided in a file and on the command line, the command line takes
precedence.
You can pass complex input from a file by referencing it from the command line. For Windows
users, this removes the requirement of having to escape JSON text. You provide a path to the
file using the file:// prefix.
PATH TYPES
- Relative paths from the same directory, for example: file://testfile.json and
  file://relative/path/to/testfile.json
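The file:// prefix handling can be sketched as follows. This is an illustration of the idea, not the CLI's actual parsing (which is richer), and the sample security list values are invented.

```python
# Sketch: resolve a file:// argument to a path and parse the JSON it
# contains; anything without the prefix is treated as inline JSON.
import json
import tempfile

def load_json_arg(arg):
    """Return parsed JSON from an inline string or a file:// reference."""
    if arg.startswith("file://"):
        path = arg[len("file://"):]
        with open(path) as f:
            return json.load(f)
    return json.loads(arg)  # inline JSON string

# Demo: write a small JSON file and read it back through the prefix.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    tmp.write('["ocid1.securitylist.sample-a", "ocid1.securitylist.sample-b"]')
    demo_path = tmp.name

security_list_ids = load_json_arg("file://" + demo_path)
```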
FILE LOCATIONS
The examples in this section use JSON that's generated for a command option and an entire
command. The JSON is saved in a file, edited, and then used as command line input.
This end-to-end example shows how to generate the JSON for a security list id option used to
create a subnet. The JSON is saved in a file, edited, and then used as command line input.
2. Create a file and add the following content, which was returned in step 1. This content
doesn't have to be escaped or on a single line; it just has to contain valid JSON.
[
"string",
"string"
]
3. Edit the file and replace the "string" values with values, as shown in the following
example.
[
"ocid1.securitylist.oc1.phx.aaaaaaaaw7c62ybv4676muq5tdrwup3v2maiquhbkbh4sf75tjcf5dm6kvlq",
"ocid1.securitylist.oc1.phx.aaaaaaaa7snx4jh5drwo2h33rwcdqev6elir55hnrhi2yfndjfons5rcqk4q"
]
This end-to-end example shows how to generate the JSON to create a virtual cloud network
(VCN). The JSON is saved in a file, edited, and then used as command line input.
2. Create a file and add the following content, which was returned in step 1. This content
doesn't have to be escaped or on a single line; it just has to contain valid JSON.
{
"cidrBlock": "string",
"compartmentId": "string",
"displayName": "string",
"dnsLabel": "string"
}
3. Edit the file and replace the "string" values with values, as shown in the following
example.
{
"cidrBlock": "10.0.0.0/16",
"compartmentId":
"ocid1.compartment.oc1..aaaaaaaal3gzijdliedxxhqol2rggndrwyg35nz5zxil26astpgeyq7jnhwa",
"displayName": "TestVCN",
"dnsLabel": "testdns"
}
5. To create the VCN using "create-vcn.json" as input, run the following command.
oci network vcn create --from-json file://create-vcn.json
EXAMPLES
The following examples show how you can use the CLI to complete complex tasks in Oracle
Cloud Infrastructure.
You can use the CLI for several object operations with the Object Storage service.
Objects can be uploaded from a file or from the command line (STDIN), and can be
downloaded to a file or to the command line (STDOUT).
Upload an object:
oci os object put -ns mynamespace -bn mybucket --name myfile.txt --file /Users/me/myfile.txt --metadata '{"key1":"value1","key2":"value2"}'
Download an object:
oci os object get -ns mynamespace -bn mybucket --name myfile.txt --file /Users/me/myfile.txt
- Downloading all objects, or all the objects that match a specified prefix, in a bucket
# Download all the objects.
oci os object bulk-download -ns mynamespace -bn mybucket --download-dir path/to/download/directory
- Deleting all objects, or all the objects that match a specified prefix, in a bucket
# Delete all the objects.
oci os object bulk-delete -ns mynamespace -bn mybucket
To get more information about the commands for bulk operations, run the following help
commands:
# bulk-upload
oci os object bulk-upload -h
# bulk-download
oci os object bulk-download -h
# bulk-delete
oci os object bulk-delete -h
Multipart operations for Object Storage include object uploads and downloads.
Multipart Uploads
Large files can be uploaded to Object Storage in multiple parts to speed up the upload. By
default, files larger than 128 MiB are uploaded using multipart operations. You can override
this default by using the --no-multipart option.
You can configure the following options for the oci os object put command:
The following example shows the command for a multipart upload with a part size of
200 MiB.
oci os object put -ns my-namespace -bn my-bucket --file path/to/large/file --part-size 200
For more information about multipart uploads, see Managing Multipart Uploads.
Multipart Downloads
Large files can be downloaded from Object Storage in multiple parts to speed up the
download.
You can configure the following options for the oci os object get command:
The following example shows the command to download any object with a size greater than
500 MiB. The object is downloaded in 128 MiB parts.
oci os object get -ns my-namespace -bn my-bucket --name my-large-object --multipart-download-threshold 500 --part-size 128
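The part arithmetic implied by the thresholds above can be sketched as follows: a transfer is split only when the object exceeds the threshold, and the number of parts follows from the part size. This is an illustration (function name invented), not the CLI's implementation, and all sizes are in MiB.

```python
# Sketch of the multipart split: single piece under the threshold,
# ceiling division by part size otherwise.
import math

def part_count(object_size_mib, threshold_mib, part_size_mib):
    if object_size_mib <= threshold_mib:
        return 1  # transferred as a single piece
    return math.ceil(object_size_mib / part_size_mib)
```

For example, a 600 MiB object with a 500 MiB threshold and 128 MiB parts would be transferred in 5 parts.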
If you installed the CLI manually, use one of the following commands to upgrade the CLI.
If you installed the CLI using the install script, use the following process to upgrade the CLI:
- Run the install script and specify the same install directory.
- When prompted, reply Y to overwrite the existing installation.
Service Errors
On Oracle Linux 7.3, if you encounter permission issues when running pip install, you
might need to use sudo.
If the oci command isn't found, this can be caused by one of the following reasons:
l pip installed the package to a different virtual environment than your active one.
l You switched to a different active virtual environment after you installed the CLI.
To determine where the CLI is installed, run the which pip and which oci commands.
If the wheel file won't install, verify that pip is up to date. To update pip, run the pip install
-U pip command. Try to install the wheel again.
Windows Issues
If the oci command isn't found, make sure that the oci.exe location is in your path (for
example, the Scripts directory in your Python installation).
Contact Information
If you want to contribute ideas, report a bug, get notified about updates, have questions, or
want to give feedback, use one of the following links.
CONTRIBUTIONS
Got a fix for a bug, or a new feature you'd like to contribute? The CLI is open source and
accepting pull requests on GitHub.
NOTIFICATIONS
To be notified when a new version of the CLI is released, subscribe to the Atom feed.
QUESTIONS OR FEEDBACK
- Services supported:
  - Object Storage
  - Archive Storage
  - Data Transfer Service
- Licensing: The Data Transfer Utility is licensed under the Universal Permissive License
  1.0 and the Apache License 2.0. Third-party content is separately licensed as described
  in the code.
- Downloading:
  - Download the .deb file to install on Debian or Ubuntu
  - Download the .rpm file to install on Oracle Linux or Red Hat Linux
For details about performing specific data transfer tasks using the Data Transfer Utility, see
Managing HDD Data Transfers.
Prerequisites
To install and use the Data Transfer Utility, you must:
You can configure the Data Transfer Utility to use your environment's network proxies by
setting the standard Linux environment variables on your host machine:
export https_proxy=http://www-proxy.example.com:80
Where X.Y.Z are the version numbers that match the installer you downloaded.
3. Confirm that the Data Transfer Utility installed successfully.
sudo dts --help
To install the Data Transfer Utility on Oracle Linux or Red Hat Linux
1. If you have not done so already, download the .rpm file.
2. Issue the yum install command as the root user.
sudo yum localinstall ./dts-X.Y.Z.x86_64.rpm
Where X.Y.Z are the version numbers that match the installer you downloaded.
3. Confirm that the Data Transfer Utility installed successfully.
sudo dts --help
The data transfer administrator /root/.oci/config configuration file requires the following
structure:
[DEFAULT]
user=<The OCID for the data transfer administrator>
fingerprint=<The fingerprint of the above user's public key>
key_file=<The _absolute_ path to the above user's private key file on the host machine>
tenancy=<The OCID for the tenancy that owns the data transfer job and bucket>
region=<The region where the transfer job and bucket should exist. Valid values are: us-phoenix-1 and
us-ashburn-1>
For example:
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaazravan2fktvutqp3bpmzlfl7nlsdtvzexnaxiuz7mbxuexxxxxx
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=~/.oci/ocid1.user.oc1..aaaaaaaazravan2fktvutqp3bpmzlfl7nlsdtvzexnaxiuz7mbxuefievzzq.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaa6cdze5nyyxtk3exdy4ynrk3uvin3ewi6iu7pswba7nh4dx6qrzcq
region=us-phoenix-1
For the data transfer administrator, you can create a single configuration file that contains
different profile sections with the credentials for multiple users, and then use the --profile
option to specify which profile to use in the command. Here is an example of a data transfer
administrator configuration file with different profile sections:
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaazravan2fktvutqp3bpmzlfl7nlsdtvzexnaxiuz7mbxuexxxxxx
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=~/.oci/ocid1.user.oc1..aaaaaaaazravan2fktvutqp3bpmzlfl7nlsdtvzexnaxiuz7mbxuefievzzq.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaa6cdze5nyyxtk3exdy4ynrk3uvin3ewi6iu7pswba7nh4dx6qrzcq
region=us-phoenix-1
[PROFILE1]
user=ocid1.user.oc1..bbbbbbbzravan2fktvutqp3bpmzlfl7nlsdtvzexnaxiuz7mbxueyyyyyyy
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=~/.oci/ocid1.user.oc1..aaaaaaaazravan2fktvutqp3bpmzlfl7nlsdtvzexnaxiuz7mbxuefievzzq.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaa6cdze5nyyxtk3exdy4ynrk3uvin3ewi6iu7pswba7nh4dx6qrzcq
region=us-ashburn-1
For example:
[DEFAULT]
user=ocid1.user.oc1..ccccccccczravan2fktvutqp3bpmzlfl7nlsdtvzexnaxiuz7mbxuezzzzzz
fingerprint=4c:1a:6f:a1:5b:9e:58:45:f7:53:43:1f:51:0f:d8:45
key_file=~/.oci/ocid1.user.oc1..aaaaaaaazravan2fktvutqp3bpmzlfl7nlsdtvzexnaxiuz7mbxuefievzzq.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaa6cdze5nyyxtk3exdy4ynrk3uvin3ewi6iu7pswba7nh4dx6qrzcq
region=us-phoenix-1
The following table lists the basic entries that are required for each configuration file and
where to get the information for each entry.
fingerprint: Fingerprint for the key pair being used. To get the value, see Required Keys and OCIDs. Required: Yes
You can verify the data transfer upload user credentials using the following command:
dts job verify-upload-user-credentials --bucket <bucket_name>
Syntax
The following command to create a transfer job shows a typical Data Transfer Utility
construct.
dts job create --compartment-id ocid1.compartment.oc1..aaaaaaaamnk2ilreg5fkgu7rarfbbhdv3a5ji4eixxgkl4uprbqk6aefv5sq --bucket mybucket --display-name "mycompany transfer1"
You can get help using --help, -h, or -?. For example:
dts --help
To get the installed version of the Data Transfer Utility, run the following command:
dts --version
What's Next
You are now ready to perform data transfer-related tasks. See Managing HDD Data Transfers.
l Java SDK
l Python SDK
l Ruby SDK
l Chef Knife Plugin
l Hadoop Distributed File System (HDFS) for Object Storage
l Terraform Provider
Java SDK
This topic describes how to install, configure, and use the Oracle Cloud Infrastructure Java
SDK.
l Services supported:
o Audit
o Core Services (Networking, Compute, Block Volume)
o Database
o IAM
o Load Balancing
o Object Storage
l Licensing: This SDK and sample are dual-licensed under the Universal Permissive
License 1.0 and the Apache License 2.0; third-party content is separately licensed as
described in the code.
l Download: GitHub
l API reference documentation: Java SDK API Reference
Requirements
JAVA 7 COMPATIBILITY
To use Java 7, you must have a version that supports TLS 1.2.
l https://blogs.oracle.com/java-platform-group/entry/java_8_will_use_tls
l http://bugs.java.com/view_bug.do?bug_id=6916074
l https://blogs.oracle.com/java-platform-group/entry/diagnosing_tls_ssl_and_https
The Java Virtual Machine (JVM) caches DNS responses from lookups for a set amount of time,
called time-to-live (TTL). This ensures faster response time in code that requires frequent
name resolution.
The JVM uses the networkaddress.cache.ttl property to specify the caching policy for DNS
name lookups. The value is an integer that represents the number of seconds to cache the
successful lookup. The default value for many JVMs, -1, indicates that the lookup should be
cached forever.
Because resources in Oracle Cloud Infrastructure use DNS names that can change, we
recommend that you change the TTL value to 60 seconds. This ensures that the new IP
address for the resource is returned on the next DNS query. You can change this value globally or
specifically for your application:
l To set TTL globally for all applications using the JVM, add the following in the $JAVA_HOME/jre/lib/security/java.security file:
networkaddress.cache.ttl=60
l To set TTL only for your application, set the following in your application's initialization
code:
java.security.Security.setProperty("networkaddress.cache.ttl", "60");
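The application-level setting above can be wrapped in a small startup helper that runs once, early, before any name lookups occur. The class and method names here are illustrative:

```java
import java.security.Security;

// Sketch: apply the recommended DNS cache TTL once, early in application
// startup (before any name lookups occur). Names here are illustrative.
public class TtlSetup {
    static void applyRecommendedTtl() {
        Security.setProperty("networkaddress.cache.ttl", "60");
    }
}
```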
You can download the Java SDK as a zip archive from GitHub. It contains the SDK, all of its
dependencies, documentation, and examples. For best compatibility and to avoid issues, use
the version of the dependencies included in the archive. Some notable issues are:
l Bouncy Castle: The SDK bundles 1.52, which is required if you use encrypted PEM keys
for authentication. If you don't use encrypted PEM keys, you can use a newer version.
l JAX-RS API: The SDK bundles version 2.0.1 of the spec. Older versions will cause issues.
l Jersey Core and Client: The SDK bundles 2.24.1, which is required to support large
object uploads to Object Storage. Older versions will not support uploads greater than
~2.1 GB.
The SDK services need two types of configuration: credentials and client-side HTTP settings.
CONFIGURING CREDENTIALS
First, you need to set up your credentials and config file. For instructions, see SDK and Tool
Configuration.
Next you need to set up the client to use the credentials. The credentials are abstracted
through an AuthenticationDetailsProvider interface. Clients can implement this interface
however they choose. We have included a simple POJO/builder class to help with this task
(SimpleAuthenticationDetailsProvider).
l The private key supplier can be created with the file path directly, or using the config
file:
Supplier<InputStream> privateKeySupplier
= new SimplePrivateKeySupplier("~/.oci/oci_api_key.pem");
Supplier<InputStream> privateKeySupplierFromConfigEntry
= new SimplePrivateKeySupplier(config.get("key_file"));
l You can then build the provider using SimpleAuthenticationDetailsProvider:
AuthenticationDetailsProvider provider
= SimpleAuthenticationDetailsProvider.builder()
.tenantId("myTenantId")
.userId("myUserId")
.fingerprint("myFingerprint")
.privateKeySupplier(privateKeySupplier)
.build();
l Finally, if you use standard config file keys and the standard config file location, you can
simplify this further by using ConfigFileAuthenticationDetailsProvider:
AuthenticationDetailsProvider provider
= new ConfigFileAuthenticationDetailsProvider("ADMIN_USER");
After you have both a credential configuration and the optional client configuration, you can
start creating service instances.
In the config file, you can insert custom key-value pairs that you define, and then reference
them as necessary. For example, you could specify a frequently used compartment ID in the
config file like so:
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcmdy5eqbb6qt2jvpkanghtgdaqedqw3rynjq
fingerprint=20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2bcmdyt2j6rx32uzr4h25vqstifsfdsq
custom_compartment_id=ocid1.compartment.oc1..aaaaaaaayzfqeibduyox6iib3olcmdar3ugly4fmameq4h7lcdlihrvur7xq
Raw Requests
Raw requests are useful, and in some cases necessary. Typical use cases are: when using
your own HTTP client, making an OCI-authenticated request to an alternate endpoint, and
making a request to an OCI API that is not currently supported in the SDK. The Java SDK
exposes the DefaultRequestSigner class that you can use to create a RequestSigner
instance for non-standard requests.
l Call setEndpoint() on the service instance. This lets you specify a full host name (for
example, https://www.example.com).
l Call setRegion() on the service instance. This selects the appropriate host name for
the service for the given region. However, if the service is not supported in the region
you set, the Java SDK returns an error.
Note that a service instance cannot be used to communicate with different regions. If you
need to make requests to different regions, create multiple service instances.
Forward Compatibility
Some response fields are enum-typed. In the future, individual services may return values
not covered by existing enums for that field. To address this possibility, every enum-type
response field has an additional value named "UnknownEnumValue". If a service returns a
value that is not recognized by your version of the SDK, then the response field will be set to
this value. Please ensure that your code handles the "UnknownEnumValue" case if you have
conditional logic based on an enum-typed field.
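For example, a conditional on an enum-typed field can keep a default branch so unknown values are handled. This is an illustrative sketch only; LifecycleState here is a stand-in enum, not an actual SDK type:

```java
// Illustrative sketch: LifecycleState is a stand-in enum, not an SDK type,
// showing how to keep a default branch for UnknownEnumValue.
public class EnumSketch {
    enum LifecycleState { Available, Terminated, UnknownEnumValue }

    static String describe(LifecycleState state) {
        switch (state) {
            case Available:
                return "ready";
            case Terminated:
                return "gone";
            default:
                // A newer service value that this SDK version doesn't recognize.
                return "unrecognized";
        }
    }
}
```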
To make synchronous calls, create an instance of the synchronous client. The general pattern
for synchronous clients is that for a service named Foo, there will be an interface named
FooService, and the synchronous client implementation will be called FooServiceClient.
Here's an example of creating an Object Storage client:
AuthenticationDetailsProvider provider = ...;
ObjectStorage clientWithDefaultClientConfig = new ObjectStorageClient(provider);
clientWithDefaultClientConfig.setRegion(Region.US_ASHBURN_1);
Synchronous calls will block until the response is available. All SDK APIs return a response
object (regardless of whether or not the API sends any content back). The response object
typically contains at least a request ID that you can use when contacting Oracle support for
help on a particular request.
ObjectStorage client = ...;
GetBucketResponse response = client.getBucket(
GetBucketRequest.builder().namespaceName("myNamespace").bucketName("myBucket").build());
String requestId = response.getOpcRequestId();
Bucket bucket = response.getBucket();
System.out.println(requestId);
System.out.println(bucket.getName());
To make asynchronous calls, create an instance of the asynchronous client. The general
pattern for asynchronous clients is that for a service named Foo, there will be an interface
named FooServiceAsync, and the asynchronous client implementation will be called
FooServiceAsyncClient. Here's an example of creating an Object Storage client:
AuthenticationDetailsProvider provider = ...;
ObjectStorageAsync clientWithDefaultClientConfig = new ObjectStorageAsyncClient(provider);
clientWithDefaultClientConfig.setRegion(Region.US_ASHBURN_1);
Asynchronous calls will return immediately. You need to provide an AsyncHandler that will be
invoked after the call completes either successfully or unsuccessfully:
ObjectStorageAsync client = ...;
AsyncHandler<GetBucketRequest, GetBucketResponse> handler = new AsyncHandler<GetBucketRequest, GetBucketResponse>() {
    @Override
    public void onSuccess(GetBucketRequest request, GetBucketResponse response) {
        System.out.println(response.getBucket().getName());
    }
    @Override
    public void onError(GetBucketRequest request, Throwable error) {
        error.printStackTrace();
    }
};
client.getBucket(GetBucketRequest.builder().namespaceName("myNamespace").bucketName("myBucket").build(), handler);
Paginated Responses
Some APIs return paginated result sets. The Response objects will contain a method to fetch
the next page token. If the token is null, there are no more items. If it is not null, you can
make an additional request (setting the token on the Request object) to get the next page of
responses. Note, some APIs may return a token even if there are no more results, so it's
important to also check whether any items were returned and stop if there are none. Here's
an example in the Object Storage API:
ObjectStorage client = ...;
ListBucketsRequest.Builder builder =
ListBucketsRequest.builder().namespaceName(namespace);
String nextPageToken = null;
do {
builder.page(nextPageToken);
ListBucketsResponse listResponse = client.listBuckets(builder.build());
List<Bucket> buckets = listResponse.getItems();
// handle buckets
nextPageToken = listResponse.getOpcNextPage();
} while (nextPageToken != null);
Exception Handling
Exceptions are runtime exceptions (unchecked), so they do not show up in method signatures.
All APIs can throw a BmcException. The exception will contain information about the
underlying HTTP request (i.e., status code or timeout). BmcException also contains a
getOpcRequestId method that you can use to obtain the request ID to provide when
contacting support.
ObjectStorage client = ...;
try {
GetBucketResponse response = client.getBucket(
GetBucketRequest.builder().namespaceName("myNamespace").bucketName("myBucket").build());
String requestId = response.getOpcRequestId();
System.out.println(requestId);
} catch (BmcException e) {
String requestId = e.getOpcRequestId();
System.out.println(requestId);
e.printStackTrace();
}
The SDK offers waiters that allow your code to wait until a specific resource reaches a desired
state. A waiter can be invoked in either a blocking or a non-blocking (with asynchronous
callback) manner, and will wait until either the desired state is reached or a timeout is
exceeded. Waiters abstract the polling logic you would otherwise have to write into an easy-
to-use single method call.
l One version uses default timing values. For example:
GetInstanceResponse response = waiters.forInstance(getInstanceRequest, Instance.LifecycleState.Running).execute();
return response.getInstance();
l The other version gives you full control over how long to wait and how much time
between polling attempts. For example:
waiters.forInstance(GetInstanceRequest, LifecycleState, TerminationStrategy, DelayStrategy)
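Conceptually, a waiter wraps a polling loop like the following stdlib-only sketch. All names here are illustrative; in real code, prefer the SDK's built-in waiters rather than hand-rolling this:

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

// Sketch of the polling loop that waiters abstract away. Illustrative only.
public class WaiterSketch {
    static <T> T waitFor(Supplier<T> fetch, Predicate<T> desiredState,
                         long delayMillis, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            T current = fetch.get();
            if (desiredState.test(current)) {
                return current; // desired state reached
            }
            try {
                Thread.sleep(delayMillis); // fixed delay between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while waiting", e);
            }
        }
        throw new IllegalStateException("timed out waiting for desired state");
    }
}
```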
Logging
Logging in the SDK is done through SLF4J. SLF4J is a logging abstraction that allows the use of
a user-supplied logging library (e.g., log4j). For more information, see the SLF4J manual.
The following is an example that enables basic logging to standard out. More advanced logging
options can be configured by using the log4j binding.
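For example, with the slf4j-simple binding on the classpath, a simplelogger.properties file like the following (property names come from the slf4j-simple binding) sends debug-level output to standard out:

```
# simplelogger.properties, read from the classpath by the slf4j-simple binding
org.slf4j.simpleLogger.defaultLogLevel=debug
org.slf4j.simpleLogger.showDateTime=true
```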
The Object Storage service supports multipart uploads to make large object uploads easier by
splitting the large object into parts. The Java SDK supports raw multipart upload operations
for advanced use cases, as well as a higher level upload class that uses the multipart upload
APIs. Managing Multipart Uploads provides links to the APIs used for multipart upload
operations. Higher level multipart uploads are implemented using the UploadManager, which
will: split a large object into parts for you, upload the parts in parallel, and then recombine
and commit the parts as a single object in storage.
The UploadObject example shows how to use the UploadManager to automatically split an
object into parts for upload to simplify interaction with the Object Storage service.
Examples
The examples are also in the downloadable .zip file for the SDK. Examples for older versions
of the SDK are in the downloadable .zip for the specific version, available on GitHub.
If you'd like to see another example not already covered, file a GitHub issue.
1. Download the SDK to a directory named oci. See GitHub for the download.
2. Unzip the SDK into the oci directory. For example: tar -xf oci-java-sdk-full.zip
3. Create your configuration file in your home directory (~/.oci/config). See Configuring
the SDK.
4. Use javac to compile one of the example classes from the examples directory. For example:
javac -cp lib/oci-java-sdk-full-<version>.jar:third-party/lib/* examples/ObjectStorageSyncExample.java
5. You should now have a class file in the examples directory. Run the example:
java -cp examples:lib/oci-java-sdk-full-<version>.jar:third-party/lib/* ObjectStorageSyncExample
Troubleshooting
OBJECT STORAGE CLIENT DOES NOT CLOSE CONNECTIONS WHEN CLIENT IS CLOSED
Too many file descriptors are opened, and it takes too long to close existing ones. An
exception may look like this:
Caused by: java.io.FileNotFoundException: classes/caspertest.pem (Too many open files)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
By default, the Java SDK can only handle keys of 128 bits or lower key length. Users get
"Invalid Key Exception" and "Illegal key size" errors when they use longer keys, such as
AES256. To work around this, install the Java Cryptography Extension (JCE) Unlimited
Strength Jurisdiction Policy Files for your JDK, or use a JDK version (8u161 or later) in which
unlimited cryptographic strength is enabled by default.
Contributions
Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and
accepting pull requests on GitHub.
Notifications
To be notified when a new version of the Java SDK is released, subscribe to the Atom feed.
Questions or Feedback
Python SDK
General information about the Python SDK:
l Services supported:
o Audit
o Core Services (Networking, Compute, Block Volume)
o Database
o IAM
o Load Balancing
o Object Storage
l Licensing: This SDK and sample are dual-licensed under the Universal Permissive
License 1.0 and the Apache License 2.0; third-party content is separately licensed as
described in the code.
l Download: Download the SDK from GitHub or get the package from the Python
Package Index (PyPi).
l Documentation: Python SDK documentation.
Ruby SDK
Version 2.0.6
The HDFS connector lets your Apache Hadoop application read and write data to and from the
Oracle Cloud Infrastructure Object Storage service.
If you use an encrypted PEM file for credentials, the passphrase will be read from
configuration using the getPassword Hadoop Configuration method. The getPassword option
checks for a password in a registered security provider. If the security provider doesn't
contain the requested key, the connector falls back to reading the plaintext passphrase directly from the
configuration file.
JAVA 7 COMPATIBILITY
To use Java 7, you must have a version that supports TLS 1.2.
l https://blogs.oracle.com/java-platform-group/entry/java_8_will_use_tls
l http://bugs.java.com/view_bug.do?bug_id=6916074
l https://blogs.oracle.com/java-platform-group/entry/diagnosing_tls_ssl_and_https
The Java Virtual Machine (JVM) caches DNS responses from lookups for a set amount of time,
called time-to-live (TTL). This ensures faster response time in code that requires frequent
name resolution.
The JVM uses the networkaddress.cache.ttl property to specify the caching policy for DNS
name lookups. The value is an integer that represents the number of seconds to cache the
successful lookup. The default value for many JVMs, -1, indicates that the lookup should be
cached forever.
Because resources in Oracle Cloud Infrastructure use DNS names that can change, we
recommend that you change the TTL value to 60 seconds. This ensures that the new IP
address for the resource is returned on the next DNS query. You can change this value globally or
specifically for your application:
l To set TTL globally for all applications using the JVM, add the following in the $JAVA_HOME/jre/lib/security/java.security file:
networkaddress.cache.ttl=60
l To set TTL only for your application, set the following in your application's initialization
code:
java.security.Security.setProperty("networkaddress.cache.ttl", "60");
You can set the following HDFS Connector properties in the core-site.xml file. The
BmcProperties page lists additional properties that you can configure for a connection to OCI
Object Storage.
fs.oci.client.hostname
l Description: The URL of the host endpoint for Object Storage. For example, https://objectstorage.us-ashburn-1.oraclecloud.com.
l Type: String
l Required: Yes
fs.oci.client.auth.tenantId
l Description: OCID of your tenancy. To get the value, see Required Keys and OCIDs.
l Type: String
l Required: Yes, unless you provide a custom authenticator
fs.oci.client.auth.userId
l Description: OCID of the user calling the API. To get the value, see Required Keys and
OCIDs.
l Type: String
l Required: Yes, unless you provide a custom authenticator
fs.oci.client.auth.fingerprint
l Description: Fingerprint for the key pair being used. To get the value, see Required
Keys and OCIDs.
l Type: String
l Required: Yes, unless you provide a custom authenticator
fs.oci.client.auth.pemfilepath
l Description: Full path and filename of the private key used for authentication. The file
should be on the local filesystem.
l Type: String
l Required: Yes, unless you provide a custom authenticator
fs.oci.client.auth.passphrase
l Description: The passphrase used for the key, if it is encrypted.
l Type: String
l Required: No
This example shows how properties can be configured in a core-site.xml file (the OCIDs are
shortened for brevity):
<configuration>
...
<property>
<name>fs.oci.client.hostname</name>
<value>https://objectstorage.us-ashburn-1.oraclecloud.com</value>
</property>
<property>
<name>fs.oci.client.auth.tenantId</name>
<value>ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4j...stifsfdsq</value>
</property>
<property>
<name>fs.oci.client.auth.userId</name>
<value>ocid1.user.oc1..aaaaaaaat5nvwcnazjc...aqw3rynjq</value>
</property>
<property>
<name>fs.oci.client.auth.fingerprint</name>
<value>20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34</value>
</property>
<property>
<name>fs.oci.client.auth.pemfilepath</name>
<value>~/.oci/oci_api_key.pem</value>
</property>
...
</configuration>
Large objects are uploaded to Object Storage using multipart uploads. The file is split into
smaller parts that are uploaded in parallel, which reduces upload times. This also enables the
HDFS connector to retry uploads of failed parts instead of failing the entire upload. However,
uploads may transiently fail, and the connector will attempt to abort partially uploaded files.
Because these files can accumulate (and you will be charged for storage), list the uploads
periodically and then, after a certain number of days, abort them manually using the Java SDK.
Information about using the Object Storage API for managing multipart uploads can be found
in Using the API.
Note: If you prefer not to use multipart uploads at all, you can disable them by setting the
Hadoop configuration property fs.oci.client.multipart.allowed to false.
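For example, disabling multipart uploads corresponds to a core-site.xml entry like this:

```xml
<property>
  <name>fs.oci.client.multipart.allowed</name>
  <value>false</value>
</property>
```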
Best Practices
The following sections contain best practices to optimize usage and performance.
DIRECTORY NAMES
There are no actual directories in Object Storage. Directory grouping is a function of naming
convention, where objects use / delimiters in their names. For example, an object named
a/foo.json implies there is a directory named a. However, if that object is deleted, the a
directory is also deleted implicitly. To preserve filesystem semantics where the directory can
exist without the presence of any files, the HDFS connector creates an actual object whose
name ends in / with a path that represents the directory (for example, it creates an object named a/).
Now, deleting a/foo.json doesn't affect the existence of the a directory, because the a/
object maintains its presence. However, it's entirely possible that somebody could delete that
a/ object without deleting the files/directories beneath it. The HDFS connector will only delete
the folder object if there are no objects beneath that path. The folder object itself is zero
bytes.
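The /-delimiter naming convention can be illustrated with a short sketch that derives the "directories" implied by an object name. The class and method names here are illustrative, not part of the connector:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: derive the "directory" objects implied by an object name,
// per the /-delimiter naming convention described above. Illustrative only.
public class DirSketch {
    static List<String> impliedFolders(String objectName) {
        List<String> folders = new ArrayList<>();
        int idx = objectName.indexOf('/');
        while (idx >= 0) {
            // Each prefix ending in '/' names an implied directory, e.g. "a/", "a/b/".
            folders.add(objectName.substring(0, idx + 1));
            idx = objectName.indexOf('/', idx + 1);
        }
        return folders;
    }
}
```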
INCONSISTENT FILESYSTEM
Deleting a directory means deleting all objects that start with the prefix representing that
directory. HDFS allows you to query for the file status of a file or a directory. The file status of
a directory is implemented by verifying that the folder object for that directory exists.
However, it's possible that the folder object has been deleted, but some of the objects with
that prefix still exist. For example, in a situation with these objects:
l a/b/foo.json
l a/b/bar.json
l a/b/
HDFS would know that directory /a/b/ exists and is a directory, and scanning it would result
in foo.json and bar.json. However, if object a/b/ was deleted, the filesystem would appear
to be in an inconsistent state. You could query it for all files in directory /a/b/ and find the
two entries, but querying for the status of the actual /a/b/ directory would result in an
exception because the directory doesn't exist. The HDFS connector does not attempt to fix up
the state of the filesystem.
FILE CREATION
Object Storage supports objects that can be many gigabytes in size. Creating files will
normally be done by writing to a temp file and then uploading the contents of the file when the
stream is closed. The temp space must be large enough to handle multiple uploads. The temp
directory used is controlled by the hadoop.tmp.dir configuration property.
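For example, to point the temp space at a larger local volume, you might set the following in core-site.xml (the path shown is only illustrative):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/mnt/large-volume/hadoop-tmp</value>
</property>
```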
DIRECTORY LISTING
Listing a directory is essentially a List bucket operation with a prefix and delimiter specified.
To create an HDFS FileStatus instance for each key, the connector performs an additional
HEAD request to get ObjectMetadata for each individual key. This will be required until Object
Storage supports richer list operation data.
HDFS filesystems and files are referenced through URIs. The scheme specifies the type of
filesystem, and the remaining part of the URI is largely free for the filesystem
implementation to interpret as it wants.
Because Object Storage is an object store, its ability to name objects as if they were files in a
filesystem is used to mimic an actual filesystem.
ROOT
The root of Object Storage filesystem is denoted by a path where the authority component
includes the bucket and the namespace:
oci://bucket@namespace/
This is always the root of the filesystem. The reason for using authority for both bucket and
namespace is that HDFS only allows the authority portion to determine where the filesystem
is; the path portion denotes just the path to the resource (so "oci://namespace/bucket" won't
work, for example). Note that the @ character is not a valid character for buckets or
namespaces, so the authority can always be parsed unambiguously.
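Parsing such a URI follows standard authority rules, as this small sketch shows. The class and method names are illustrative, not part of the connector:

```java
import java.net.URI;

// Sketch: split an oci:// URI into bucket, namespace, and path, following the
// bucket@namespace authority convention described above. Illustrative only.
public class UriSketch {
    static String[] parse(String uriString) {
        URI uri = URI.create(uriString);
        // The authority is "bucket@namespace"; '@' never appears in either name,
        // so splitting on the first '@' is unambiguous.
        String[] authority = uri.getAuthority().split("@", 2);
        return new String[] { authority[0], authority[1], uri.getPath() };
    }
}
```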
SUB-DIRECTORIES
Sub-directories do not actually exist, but can be mimicked by creating objects with /
characters. For example, two files named a/b/c/foo.json and a/b/d/bar.json would
appear as if they were in a common directory a/b. This would be achieved by using the Object
Storage prefix- and delimiter-based querying. In the given example, referencing a sub-
directory as a URI would be:
oci://bucket@namespace/a/b/
OBJECTS/FILES
Logging
Logging in the connector is done through SLF4J. SLF4J is a logging abstraction that allows the
use of a user-supplied logging library (e.g., log4j). For more information, see the SLF4J
manual.
The following example shows how to enable basic logging to standard output.
You can configure more advanced logging options by using the log4j binding.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.commons.io.IOUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.oracle.oci.hdfs.BmcFilesystem;
import lombok.RequiredArgsConstructor;
@RequiredArgsConstructor
public class SampleOracleBmcHadoopJob
{
private static final String SAMPLE_JOB_PATH = "/samplehadoopjob";
private static final String INPUT_FILE = SAMPLE_JOB_PATH + "/input.dat";
private static final String OUTPUT_DIR = SAMPLE_JOB_PATH + "/output";
// non-static: since this is the runner class, it needs to initialize after we set the properties
private final Logger log = LoggerFactory.getLogger(SampleOracleBmcHadoopJob.class);
/**
 * Runner for sample hadoop job. This expects 3 args: path to configuration file,
 * Object Store namespace, Object Store bucket. To run this, you must:
 * <ul>
 * <li>Create a standard hadoop configuration file</li>
 * <li>Create the bucket ahead of time.</li>
 * </ul>
 * This runner will create a test input file in a file '/samplehadoopjob/input.dat',
 * and job results will be written to '/samplehadoopjob/output'.
 *
 * @param args
 *            1) path to configuration file, 2) namespace, 3) bucket
 * @throws Exception
 */
public static void main(final String[] args) throws Exception
{
if (args.length != 3)
{
throw new IllegalArgumentException(
        "Must have 3 args: 1) path to config file, 2) object storage namespace, 3) object storage bucket");
}
log.info("Creating job");
final Job job = this.createJob(configuration);
log.info("Executing job...");
final int response = job.waitForCompletion(true) ? 0 : 1;
private void setup(final String uri, final Configuration configuration) throws IOException, URISyntaxException
{
try (final BmcFilesystem fs = new BmcFilesystem())
{
fs.initialize(new URI(uri), configuration);
fs.delete(new Path(SAMPLE_JOB_PATH), true);
final FSDataOutputStream output = fs.create(new Path(INPUT_FILE));
output.writeChars("foo\nbar\ngak\ntest\nfoo\ngak\n\ngak");
output.close();
}
}
package com.oracle.oci.hadoop.example;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
// Class declaration reconstructed for compilation; the class name is illustrative.
public class SimpleMapper extends Mapper<Object, Text, Text, IntWritable>
{
    private static final IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
public void map(final Object key, final Text value, final Context context) throws IOException,
InterruptedException
{
final StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens())
{
this.word.set(itr.nextToken());
context.write(this.word, one);
}
}
}
package com.oracle.oci.hadoop.example;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
// Class declaration reconstructed for compilation; the class name is illustrative.
public class SimpleReducer extends Reducer<Text, IntWritable, Text, IntWritable>
{
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(final Text key, final Iterable<IntWritable> values, final Context context)
throws IOException, InterruptedException
{
int sum = 0;
for (final IntWritable val : values)
{
sum += val.get();
}
this.result.set(sum);
context.write(key, this.result);
}
}
Troubleshooting
The HDFS Connector can only handle keys of 128 bits or lower key length by default. Users get
"Invalid Key Exception" and "Illegal key size" errors when they use longer keys, such as AES256.
Use one of the following workarounds to fix this issue:
l Install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for your JDK.
l Use a JDK version (8u161 or later) in which unlimited cryptographic strength is enabled by default.
Questions or Feedback
l Licensing: This provider and sample are licensed under the Mozilla Public License 2.0;
third-party content is separately licensed as described in the code.
l Download: GitHub
l Documentation: README
Contributions
Got a fix for a bug, or a new feature you'd like to contribute? The Chef Knife Plugin for Oracle
Cloud Infrastructure is open source and accepting pull requests on GitHub.
Notifications
To be notified when a new version of the Chef Knife Plugin for Oracle Cloud Infrastructure is
released, subscribe to the Atom feed.
Questions or Feedback
Terraform Provider
This topic provides information about installing, configuring, and using the Terraform provider
with the Terraform orchestration tool.
l Services supported:
o Core Services (Networking, Compute, Block Volume)
o Database
o IAM
o Load Balancing
o Object Storage
l Licensing: This provider and sample are licensed under the Mozilla Public License 2.0;
third-party content is separately licensed as described in the code.
l Download: GitHub
l Documentation: README
Requirements
Contributions
Got a fix for a bug, or a new feature you'd like to contribute? The Terraform provider is open
source and accepting pull requests on GitHub.
Notifications
To be notified when a new version of the Terraform provider is released, subscribe to the
Atom feed.
Questions or Feedback
l using a configuration file (required for the CLI, optional for SDKs)
l declaring a configuration at runtime (supported for all SDK configuration fields, limited
to the region field for the CLI)
l using a configuration file and declaring a configuration at runtime
Refer to the documentation for each SDK for information about the config object and any
exceptions when using a configuration file. For example, the Java SDK does not pick up the
region value in a configuration file.
Windows Users
Because Windows File Explorer doesn't support folder names that start with a period, you have
to use PowerShell to create the folder. Open the PowerShell console and type: mkdir ~/.oci.
The following table lists the basic entries that are required for the config file, as well as where
to get the information for each entry.
user: OCID of the user calling the API. To get the value, see Required Keys and OCIDs. Required: Yes
Example: ocid1.user.oc1..aaaaaaaa65vwl75tewwm32rgqvm6i34unq (shortened for brevity)
fingerprint: Fingerprint for the key pair being used. To get the value, see Required Keys and OCIDs. Required: Yes
Example: 20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
key_file: Full path and filename of the private key used for authentication. Required: Yes
Example: ~/.oci/oci_api_key.pem
tenancy: OCID of your tenancy. To get the value, see Required Keys and OCIDs. Required: Yes
Example: ocid1.tenancy.oc1..aaaaaaaaba3pv6wuzr4h25vqstifsfdsq (shortened for brevity)
region: An Oracle Cloud Infrastructure region. Required: Yes
Example: us-ashburn-1
The following example shows key values in a config file. This example has a DEFAULT profile
and an additional profile, ADMIN_USER. Any value that isn't explicitly defined for the
ADMIN_USER profile (or any other profiles you add to the config file) is inherited from the
DEFAULT profile.
[DEFAULT]
user=ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq
fingerprint=20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq
region=us-ashburn-1
[ADMIN_USER]
# Subsequent profiles inherit unspecified values from DEFAULT.
user=ocid1.user.oc1..aaaaaaaa65vwl7zut55hiavppn4nbfwyccuecuch5tewwm32rgqvm6i34unq
fingerprint=72:00:22:7f:d3:8b:47:a4:58:05:b8:95:84:31:dd:0e
key_file=keys/admin_key.pem
pass_phrase=mysecretphrase
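The inheritance behavior can be sketched with a minimal, stdlib-only parser. This is illustrative only; the SDKs ship their own config-file readers, and real code should use those:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative, stdlib-only sketch of profile parsing with DEFAULT inheritance.
public class ConfigSketch {
    static Map<String, String> parse(List<String> lines, String profile) {
        Map<String, Map<String, String>> sections = new HashMap<>();
        String current = "DEFAULT";
        sections.put(current, new HashMap<>());
        for (String raw : lines) {
            String line = raw.trim();
            if (line.isEmpty() || line.startsWith("#")) {
                continue; // skip blanks and comments
            }
            if (line.startsWith("[") && line.endsWith("]")) {
                current = line.substring(1, line.length() - 1);
                sections.putIfAbsent(current, new HashMap<>());
            } else {
                int eq = line.indexOf('=');
                if (eq > 0) {
                    sections.get(current).put(line.substring(0, eq).trim(),
                            line.substring(eq + 1).trim());
                }
            }
        }
        // A profile inherits every value it does not explicitly override from DEFAULT.
        Map<String, String> merged = new HashMap<>(sections.get("DEFAULT"));
        merged.putAll(sections.getOrDefault(profile, new HashMap<>()));
        return merged;
    }
}
```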
1. Create a user in IAM for the person or system who will be calling the API, and put that
user in at least one IAM group with any desired permissions. See "Adding Users" in the
Oracle Cloud Infrastructure Getting Started Guide. You can skip this if the user exists
already.
2. Get these items:
l RSA key pair in PEM format (minimum 2048 bits). See How to Generate an API
Signing Key.
l Fingerprint of the public key. See How to Get the Key's Fingerprint.
l Tenancy's OCID and user's OCID. See Where to Get the Tenancy's OCID and
User's OCID.
3. Upload the public key from the key pair in the Console. See How to Upload the Public
Key.
4. If you're using one of the Oracle SDKs or tools, supply the required credentials listed
above in either a configuration file or a config object in the code. See SDK and Tool
Configuration. If you're instead building your own client, see Request Signatures.
Important: This key pair is not the SSH key that you use to
access compute instances. See Security Credentials.
Both the private key and public key must be in PEM format
(not SSH-RSA format). The public key in PEM format looks
something like this:
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQE...
...
-----END PUBLIC KEY-----
Note: For Windows, you may need to insert -passout stdin to be prompted for
a passphrase. The prompt will just be the blinking cursor, with no text.
openssl genrsa -out ~/.oci/oci_api_key.pem -aes128 -passout stdin 2048
3. Ensure that only you can read the private key file:
chmod go-rwx ~/.oci/oci_api_key.pem
Note: For Windows, if you generated the private key with a passphrase, you may need
to insert -passin stdin to be prompted for the passphrase. The prompt will just be the
blinking cursor, with no text.
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem -passin stdin
5. Copy the contents of the public key to the clipboard using pbcopy, xclip or a similar tool
(you'll need to paste the value into the Console later). For example:
cat ~/.oci/oci_api_key_public.pem | pbcopy
Your API requests will be signed with your private key, and Oracle will use the public key to
verify the authenticity of the request. You must upload the public key to IAM (instructions
below).
When you upload the public key in the Console, the fingerprint is also automatically displayed
there. It looks something like this: 12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef
Notice that after you've uploaded your first public key, you can also use the UploadApiKey API
operation to upload additional keys. You can have up to three API key pairs per user. In an
API request, you specify the key's fingerprint to indicate which key you're using to sign the
request.
Endpoints
Oracle Cloud Infrastructure has these APIs and corresponding regional endpoints:
- Audit API:
  - https://audit.us-ashburn-1.oraclecloud.com
  - https://audit.us-phoenix-1.oraclecloud.com
  - https://audit.eu-frankfurt-1.oraclecloud.com
- Core Services API (covering Networking, Compute, and Block Volume):
  - https://iaas.us-ashburn-1.oraclecloud.com
  - https://iaas.us-phoenix-1.oraclecloud.com
  - https://iaas.eu-frankfurt-1.oraclecloud.com
- Database API:
  - https://database.us-ashburn-1.oraclecloud.com
  - https://database.us-phoenix-1.oraclecloud.com
  - https://database.eu-frankfurt-1.oraclecloud.com
- DNS Zone Management API:
  - https://dns.us-ashburn-1.oraclecloud.com
  - https://dns.us-phoenix-1.oraclecloud.com
  - https://dns.eu-frankfurt-1.oraclecloud.com
- File Storage API:
  - https://filestorage.us-ashburn-1.oraclecloud.com
  - https://filestorage.us-phoenix-1.oraclecloud.com
  - https://filestorage.eu-frankfurt-1.oraclecloud.com
- IAM API:
  - https://identity.us-ashburn-1.oraclecloud.com
  - https://identity.us-phoenix-1.oraclecloud.com
  - https://identity.eu-frankfurt-1.oraclecloud.com
API Version
The base path of the endpoint includes the desired API version (for example, 20160918).
Here's an example for a POST request to create a new VCN in the Ashburn region:
POST https://iaas.us-ashburn-1.oraclecloud.com/20160918/vcns
For example:
curl -s --head https://iaas.us-phoenix-1.oraclecloud.com | grep Date
Many of the API operations require JSON in the request body or return JSON in the response
body. The specific contents of the JSON are described in the API documentation for the
individual operation. Notice that the JSON is not wrapped or labeled according to the
operation's name or the object's name or type.
{
"compartmentId":
"ocid1.compartment.oc1..aaaaaaaauwjnv47knr7uuuvqar5bshnspi6xoxsfebh3vy72fi4swgrkvuvq",
"displayName": "Apex Virtual Cloud Network",
"cidrBlock": "172.16.0.0/16"
}
{
  "id": "ocid1.vcn.oc1.phx.aaaaaaaa4ex5pqjtkjhdb4h4gcnko7vx5uto5puj5noa5awznsqpwjt3pqyq",
  "compartmentId": "ocid1.compartment.oc1..aaaaaaaauwjnv47knr7uuuvqar5bshnspi6xoxsfebh3vy72fi4swgrkvuvq",
  "displayName": "Apex Virtual Cloud Network",
  "cidrBlock": "172.16.0.0/16",
  "defaultRouteTableId": "ocid1.routetable.oc1.phx.aaaaaaaaba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq",
  "defaultSecurityListId": "ocid1.securitylist.oc1.phx.aaaaaaaac6h4ckr3ncbxmvwinfvzxjbr7owu5hfzbvtu33kfe7hgscs5fjaq",
  "defaultDhcpOptionsId": "ocid1.dhcpoptions.oc1.phx.aaaaaaaawglzn7s5sogyfznl25a4vxgu76c2hrgvzcd3psn6vcx33lzmu2xa",
  "state": "PROVISIONING",
  "timeCreated": "2016-07-22T17:43:01.389+0000"
}
Error Format
If a request results in an error, the response contains a standard HTTP response code with 4xx
for client errors and 5xx for server errors. The body also includes JSON with an error code
and a description of the error. For example:
{
"code": "InvalidParameter",
"message": "Description may not be empty; description size must be between 1 and 400"
}
Request Throttling
Oracle Cloud Infrastructure applies throttling to many API requests to prevent accidental or
abusive use of resources. If you make too many requests too quickly, you might see some
succeed and others fail. Oracle recommends that you implement an exponential back-off,
starting from a few seconds to a maximum of 60 seconds. When a request fails due to
throttling, the system returns response code 429 and the following error code and description:
{
"code": "TooManyRequests",
"message": "User-rate limit exceeded."
}
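The recommended exponential back-off can be sketched as follows. This is an illustration, not SDK code; request_fn is a placeholder for one signed API call that returns an object with a status_code attribute (for example, a requests.Response):

```python
import time

def call_with_backoff(request_fn, base_delay=2, max_delay=60, max_attempts=8):
    """Retry request_fn with exponential back-off while it returns HTTP 429."""
    delay = base_delay  # start from a few seconds, as recommended
    for _ in range(max_attempts):
        response = request_fn()
        if response.status_code != 429:
            return response
        time.sleep(min(delay, max_delay))  # wait, capped at max_delay seconds
        delay *= 2  # double the wait for the next attempt
    raise RuntimeError("still throttled after %d attempts" % max_attempts)
```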
You can poll a resource to determine its state. For example, when you call GetInstance, the
response body contains an instance resource that includes the lifecycleState attribute. You
might want your code to wait until the instance's lifecycleState is RUNNING before
proceeding.
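A generic polling loop along these lines could look like the following sketch. Here get_resource is a placeholder for a signed GET such as GetInstance, assumed to return the resource as a dict with a lifecycleState key:

```python
import time

def wait_for_state(get_resource, target_state, delay=2, max_wait=300):
    """Poll get_resource() until its lifecycleState reaches target_state."""
    waited = 0.0
    while waited <= max_wait:
        resource = get_resource()
        if resource["lifecycleState"] == target_state:
            return resource
        time.sleep(delay)  # wait between polls to avoid throttling
        waited += delay
    raise TimeoutError("resource did not reach %s within %ss" % (target_state, max_wait))
```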
Different resources take different amounts of time to transition between states. Therefore,
the optimal frequency and duration parameters for a polling strategy can vary among
resources. The Oracle Cloud Infrastructure SDK waiters use the following default strategy:
You can find your tenancy's OCID in the Console, in the footer at the bottom of the page. The
tenancy OCID looks something like this (notice the word "tenancy" in it):
ocid1.tenancy.oc1..aaaaaaaauwjnv47knr7uuuvqar5bshnspi6xoxsfebh3vy72fi4swgrkvuvq
List Pagination
When you call a "List" operation (for example, ListInstances), you can paginate the results
and limit the response to only one page of results. To do this, in the GET request, set the
limit to the number of items you want returned in the response. The opc-next-page header
will then appear in the response if there could be items left to get. Include the header's value
as the page parameter in the subsequent GET request. Repeat this process until you page
forward through the entire list.
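The pagination loop above can be sketched as follows. fetch_page is a placeholder for the signed GET; it is assumed to return (items, opc_next_page), where opc_next_page is the header value, or None once there could be no items left to get:

```python
def list_all(fetch_page, limit=100):
    """Collect every item from a paginated List operation."""
    items = []
    page = None  # no page parameter on the first request
    while True:
        batch, page = fetch_page(limit, page)
        items.extend(batch)
        if page is None:  # no opc-next-page header: the whole list has been paged
            return items
```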
Retry Token
For some operations you can provide a unique retry token (opc-retry-token) so the request
can be retried in case of a timeout or server error without the risk of executing that same
action again. The token expires after 24 hours, but can be invalidated before then due to
conflicting operations (for example, if a resource has been deleted and purged from the
system, then a retry of the original creation request may be rejected).
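One way to attach a retry token is sketched below (an illustration; with_retry_token is a hypothetical helper, not part of any SDK). Reusing the same token on a retry lets the service recognize the repeated request instead of executing the action twice:

```python
import uuid

def with_retry_token(headers=None):
    """Return a copy of headers with a unique opc-retry-token added if absent."""
    headers = dict(headers or {})
    headers.setdefault("opc-retry-token", uuid.uuid4().hex)
    return headers
```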
Request Signatures
This topic describes how to sign Oracle Cloud Infrastructure API requests.
- Bash
- C#
- Java
- NodeJS
- Perl
- PHP
- Python
- Ruby
Signature Version 1
The signature described here is version 1 of the Oracle Cloud Infrastructure API signature. In
the future, if Oracle modifies the method for signing requests, the version number will be
incremented and your company will be notified.
You also need the OCIDs for your tenancy and user. See Where to Get the Tenancy's OCID and
User's OCID.
3. Create the signature from the signing string, using your private key and the RSA-
SHA256 algorithm.
4. Add the resulting signature and other required information to the Authorization
header in the request.
See the remaining sections in this topic for details about these steps.
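Before the full samples later in this topic, here is a minimal sketch of assembling the signing string that step 3 then signs with RSA-SHA256. The header values are illustrative, and header_order must match the headers list declared in the Authorization header:

```python
def signing_string(method, path_with_query, headers, header_order):
    """Build the string to be signed, in draft-cavage style."""
    parts = []
    for name in header_order:
        if name == "(request-target)":
            # pseudo-header: lowercased method, then path plus query string
            parts.append("(request-target): %s %s" % (method.lower(), path_with_query))
        else:
            parts.append("%s: %s" % (name.lower(), headers[name]))
    return "\n".join(parts)

example = signing_string(
    "GET", "/20160918/instances?limit=10",
    {"date": "Thu, 05 Jan 2014 21:31:40 GMT",
     "host": "iaas.us-phoenix-1.oraclecloud.com"},
    ["date", "(request-target)", "host"])
```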
Authorization Header
The Oracle Cloud Infrastructure signature uses the "Signature" Authentication scheme (with
an Authorization header), and not the Signature HTTP header.
Required Headers
This section describes the headers that must be included in the signing string.
For GET and DELETE requests (when there's no content in the request body), the signing string
must include at least these headers:

- (request-target)
- host
- date or x-date

For PUT and POST requests (when there's content in the request body), the signing string
must include at least these headers:

- (request-target)
- host
- date or x-date
- content-length
- content-type
- x-content-sha256
Important:
For PUT and POST requests, your client must compute the x-content-sha256 and include it
in the request and signing string, even if the body is an empty string. Also, the
content-length is always required in the request and signing string, even if the body is
empty. Some HTTP clients will not send the content-length if the body is empty, so you
must explicitly ensure your client sends it. If date and x-date are both included, Oracle
uses x-date. The x-date is used to protect against the reuse of the signed portion of the
request (replay attacks).
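Computing the x-content-sha256 value is a Base64 encoding of the SHA-256 digest of the raw request body; note that a value for an empty body must still be computed and sent. A minimal sketch:

```python
import base64
import hashlib

def x_content_sha256(body: bytes) -> str:
    """Base64 of the SHA-256 digest of the request body (empty body included)."""
    return base64.b64encode(hashlib.sha256(body).digest()).decode("ascii")
```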
For Object Storage PUT requests, the signing string must include at least these headers:

- (request-target)
- host
- date or x-date
If the request also includes any of the other headers that are normally required for PUT
requests (see the list above), then those headers must also be included in the signing string.
The order of the headers in the signing string does not matter. Just make sure to specify the
order in the headers parameter in the Authorization header, as described in
draft-cavage-http-signatures-05.
Important:
When forming the signing string, you must URL encode all parameters in the path and query
string (but not the headers) according to RFC 3986.
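A quick check of the required encoding, using Python's standard library (this value matches the encoded availabilityDomain parameter in the example GET request under Test Values):

```python
from urllib.parse import quote

# Percent-encode a path or query value per RFC 3986; safe="" ensures "/" is
# encoded too, so the result is safe to embed in a path segment.
encoded = quote("Pjwf: PHX-AD-1", safe="")
```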
Key Identifier
The keyId in the Authorization header consists of the tenancy's OCID, the user's OCID, and
the public key's fingerprint, separated by forward slashes. An example keyId looks like this
(wrapped to better fit the page):
ocid1.tenancy.oc1..aaaaaaaauwjnv47knr7uuuvqar5bshnspi6xoxsfebh3vy72fi4swgrkvuvq/ocid1.user.oc1..aaaaaaa
aba3pv6wkcr4jqae5f44n2b2m2yt2j6rx32uzr4h25vqstifsfdsq/40:a4:f8:a0:40:4f:a3:2f:e0:fd:4e:b9:25:72:81:5f
Signing Algorithm
The signing algorithm must be RSA-SHA256, and you must set algorithm="rsa-sha256" in
the Authorization header (notice the quotation marks).
Signature Version
You should include version="1" in the Authorization header (notice the quotation marks).
If you do not, it's assumed that you're using whatever the current version is (which is version
1 at this time).
Example Header
Here's an example of the general syntax of the Authorization header (for a request with
content in the body):
Authorization: Signature version="1",keyId="<tenancy_ocid>/<user_ocid>/<key_
fingerprint>",algorithm="rsa-sha256",headers="(request-target) date x-content-sha256 content-type
content-length",signature="Base64(RSA-SHA256(<signing_string>))"
Test Values
Here's an example key pair, two example requests, and the resulting Authorization header
for each.
Z4UMR7EOcpfdUE9Hf3m/hs+FUR45uBJeDK1HSFHD8bHKD6kv8FPGfJTotc+2xjJw
oYi+1hqp1fIekaxsyQIDAQAB
-----END PUBLIC KEY-----
For the following GET request (line breaks inserted between query parameters for easier reading; also
notice the URL encoding as mentioned earlier):
GET https://iaas.us-phoenix-1.oraclecloud.com/20160918/instances
?availabilityDomain=Pjwf%3A%20PHX-AD-1
&compartmentId=ocid1.compartment.oc1..aaaaaaaam3we6vgnherjq5q2idnccdflvjsnog7mlr6rtdb25gilchfeyjxa
&displayName=TeamXInstances
&volumeId=ocid1.volume.oc1.phx.abyhqljrgvttnlx73nmrwfaux7kcvzfs3s66izvxf2h4lgvyndsdsnoiwr5q
Date: Thu, 05 Jan 2014 21:31:40 GMT
The resulting Authorization header would be (line breaks inserted for easier reading):
enancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq/
ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3ryn
jq/20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34",algorithm="rsa-sha256
",signature="GBas7grhyrhSKHP6AVIj/h5/Vp8bd/peM79H9Wv8kjoaCivujVXlpbKLjMPe
DUhxkFIWtTtLBj3sUzaFj34XE6YZAHc9r2DmE4pMwOAy/kiITcZxa1oHPOeRheC0jP2dqbTll
8fmTZVwKZOKHYPtrLJIJQHJjNvxFWeHQjMaR7M="
POST https://iaas.us-phoenix-1.oraclecloud.com/20160918/volumeAttachments
Date: Thu, 05 Jan 2014 21:31:40 GMT
{
"compartmentId":
"ocid1.compartment.oc1..aaaaaaaam3we6vgnherjq5q2idnccdflvjsnog7mlr6rtdb25gilchfeyjxa",
"instanceId": "ocid1.instance.oc1.phx.abuw4ljrlsfiqw6vzzxb43vyypt4pkodawglp3wqxjqofakrwvou52gb6s5a",
"volumeId": "ocid1.volume.oc1.phx.abyhqljrgvttnlx73nmrwfaux7kcvzfs3s66izvxf2h4lgvyndsdsnoiwr5q"
}
Sample Code
This section shows the basic code for signing API requests.
Bash
# Version: 1.0.1
# Usage:
# oci-curl <host> <method> [file-to-send-as-body] <request-target> [extra-curl-args]
#
# ex:
# oci-curl iaas.us-ashburn-1.oraclecloud.com get "/20160918/instances?compartmentId=some-compartment-ocid"
# oci-curl iaas.us-ashburn-1.oraclecloud.com post ./request.json "/20160918/vcns"
function oci-curl {
# TODO: update these values to your own
local tenancyId="ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq";
local authUserId="ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq";
local keyFingerprint="20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34";
local privateKeyPath="/Users/someuser/.oci/oci_api_key.pem";
local alg=rsa-sha256
local sigVersion="1"
local now="$(LC_ALL=C \date -u "+%a, %d %h %Y %H:%M:%S GMT")"
local host=$1
local method=$2
local extra_args
local keyId="$tenancyId/$authUserId/$keyFingerprint"
case $method in
"get" | "GET")
local target=$3
extra_args=("${@: 4}")
local curl_method="GET";
local request_method="get";
;;
"delete" | "DELETE")
local target=$3
extra_args=("${@: 4}")
local curl_method="DELETE";
local request_method="delete";
;;
"head" | "HEAD")
local target=$3
extra_args=("${@: 4}")
local curl_method="HEAD";
local request_method="head";
;;
"post" | "POST")
local body=$3
local target=$4
extra_args=("${@: 5}")
local curl_method="POST";
local request_method="post";
local content_sha256="$(openssl dgst -binary -sha256 < $body | openssl enc -e -base64)";
local content_type="application/json";
local content_length="$(wc -c < $body | xargs)";
;;
"put" | "PUT")
local body=$3
local target=$4
extra_args=("${@: 5}")
local curl_method="PUT"
local request_method="put"
local content_sha256="$(openssl dgst -binary -sha256 < $body | openssl enc -e -base64)";
local content_type="application/json";
local content_length="$(wc -c < $body | xargs)";
;;
# This line will url encode all special characters in the request target except "/", "?", "=", and "&",
# since those characters are used in the request target to indicate path and query string structure.
# If you need to encode any of "/", "?", "=", or "&", such as when used as part of a path value or
# query string key or value, you will need to do that yourself in the request target you pass in.
encoded+="${o}"
done
echo "${encoded}"
}
An example of a request.json file that could be used with the preceding Bash code is shown
next:
{
"compartmentId": "some-compartment-id",
"displayName": "some-vcn-display-name",
"cidrBlock": "10.0.0.0/16"
}
C#
// Version 1.0.1
namespace Oracle.Oci
{
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Security.Cryptography;
using System.Text;
//
// Nuget Package Manager Console: Install-Package BouncyCastle
// Nuget CLI: nuget install BouncyCastle
//
using Org.BouncyCastle.Crypto;
using Org.BouncyCastle.Crypto.Parameters;
using Org.BouncyCastle.OpenSsl;
using Org.BouncyCastle.Security;
var tenancyId =
"ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq";
var compartmentId =
"ocid1.compartment.oc1..aaaaaaaam3we6vgnherjq5q2idnccdflvjsnog7mlr6rtdb25gilchfeyjxa";
var userId = "ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq";
var fingerprint = "20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34";
var privateKeyPath = "private.pem";
var privateKeyPassphrase = "password";
signer.SignRequest(request);
ExecuteRequest(request);
request = (HttpWebRequest)WebRequest.Create(uri);
request.Method = "POST";
request.Accept = "application/json";
request.ContentType = "application/json";
request.Headers["x-content-sha256"] = Convert.ToBase64String(SHA256.Create().ComputeHash
(bytes));
{
stream.Write(bytes, 0, bytes.Length);
}
signer.SignRequest(request);
ExecuteRequest(request);
}
/// <summary>
/// Adds the necessary authorization header for signed requests to OCI services.
/// Documentation for request signatures can be found here: https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/signingrequests.htm
/// </summary>
/// <param name="tenancyId">The tenancy OCID</param>
/// <param name="userId">The user OCID</param>
/// <param name="fingerprint">The fingerprint corresponding to the provided key</param>
/// <param name="privateKeyPath">Path to a PEM file containing a private key</param>
/// <param name="privateKeyPassphrase">An optional passphrase for the private key</param>
public RequestSigner(string tenancyId, string userId, string fingerprint, string
privateKeyPath, string privateKeyPassphrase="")
{
// This is the keyId for a key uploaded through the console
this.keyId = $"{tenancyId}/{userId}/{fingerprint}";
AsymmetricCipherKeyPair keyPair;
using (var fileStream = File.OpenText(privateKeyPath))
{
try {
keyPair = (AsymmetricCipherKeyPair)new PemReader(fileStream, new Password
(privateKeyPassphrase.ToCharArray())).ReadObject(); }
catch (InvalidCipherTextException) {
throw new ArgumentException("Incorrect passphrase for private key");
}
}
// for PUT and POST, if the body is empty we still must explicitly set content-length = 0 and x-content-sha256
// the caller may already do this, but we shouldn't require it since we can determine it here
if (request.ContentLength <= 0 && (string.Equals(requestMethodUpper, "POST") ||
string.Equals(requestMethodUpper, "PUT")))
{
request.ContentLength = 0;
request.Headers["x-content-sha256"] = Convert.ToBase64String(SHA256.Create
().ComputeHash(new byte[0]));
}
newline = "\n";
}
/// <summary>
/// Implements Bouncy Castle's IPasswordFinder interface to allow opening password protected private keys.
/// </summary>
public class Password : IPasswordFinder
{
private readonly char[] password;
Java
This sample omits the optional version field in the Authorization header.
/*
* @version 1.0.1
*
<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>19.0</version>
</dependency>
*/
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.hash.Hashing;
/*
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.5</version>
</dependency>
*/
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.methods.HttpRequestBase;
import org.apache.http.entity.ByteArrayEntity;
/*
<dependency>
<groupId>org.tomitribe</groupId>
<artifactId>tomitribe-http-signatures</artifactId>
<version>1.0</version>
</dependency>
*/
import org.apache.http.entity.StringEntity;
import org.tomitribe.auth.signatures.MissingRequiredHeaderException;
import org.tomitribe.auth.signatures.PEM;
import org.tomitribe.auth.signatures.Signature;
import org.tomitribe.auth.signatures.Signer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UnsupportedEncodingException;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.Key;
import java.security.PrivateKey;
import java.security.spec.InvalidKeySpecException;
import java.text.SimpleDateFormat;
import java.util.*;
import java.util.stream.Collectors;
/**
* This example creates a {@link RequestSigner}, then prints out the Authorization header
* that is inserted into the HttpGet object.
*
* <p>
* apiKey is the identifier for a key uploaded through the console.
* privateKeyFilename is the location of your private key (that matches the uploaded public key for apiKey).
* </p>
*
* The signed HttpGet request is not executed, since instanceId does not map to a real instance.
*/
public class Signing {
public static void main(String[] args) throws UnsupportedEncodingException {
HttpRequestBase request;
"ocid1.compartment.oc1..aaaaaaaam3we6vgnherjq5q2idnccdflvjsnog7mlr6rtdb25gilchfeyjxa".replace(":",
"%3A"),
"TeamXInstances",
"ocid1.volume.oc1.phx.abyhqljrgvttnlx73nmrwfaux7kcvzfs3s66izvxf2h4lgvyndsdsnoiwr5q".replace(":", "%3A")
);
signer.signRequest(request);
System.out.println(uri);
System.out.println(request.getFirstHeader("Authorization"));
signer.signRequest(request);
System.out.println("\n" + uri);
System.out.println(request.getFirstHeader("Authorization"));
/**
* Load a {@link PrivateKey} from a file.
*/
private static PrivateKey loadPrivateKey(String privateKeyFilename) {
try (InputStream privateKeyStream = Files.newInputStream(Paths.get(privateKeyFilename))){
return PEM.readPrivateKey(privateKeyStream);
} catch (InvalidKeySpecException e) {
throw new RuntimeException("Invalid format for private key");
} catch (IOException e) {
throw new RuntimeException("Failed to load private key");
}
}
/**
* A light wrapper around https://github.com/tomitribe/http-signatures-java
*/
public static class RequestSigner {
private static final SimpleDateFormat DATE_FORMAT;
private static final String SIGNATURE_ALGORITHM = "rsa-sha256";
private static final Map<String, List<String>> REQUIRED_HEADERS;
static {
DATE_FORMAT = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);
DATE_FORMAT.setTimeZone(TimeZone.getTimeZone("GMT"));
REQUIRED_HEADERS = ImmutableMap.<String, List<String>>builder()
.put("get", ImmutableList.of("date", "(request-target)", "host"))
.put("head", ImmutableList.of("date", "(request-target)", "host"))
.put("delete", ImmutableList.of("date", "(request-target)", "host"))
.put("put", ImmutableList.of("date", "(request-target)", "host", "content-length",
"content-type", "x-content-sha256"))
.put("post", ImmutableList.of("date", "(request-target)", "host", "content-length",
"content-type", "x-content-sha256"))
.build();
}
private final Map<String, Signer> signers;
/**
* @param apiKey The identifier for a key uploaded through the console.
* @param privateKey The private key that matches the uploaded public key for the given apiKey.
*/
public RequestSigner(String apiKey, Key privateKey) {
this.signers = REQUIRED_HEADERS
.entrySet().stream()
.collect(Collectors.toMap(
entry -> entry.getKey(),
entry -> buildSigner(apiKey, privateKey, entry.getKey())));
}
/**
* Create a {@link Signer} that expects the headers for a given method.
* @param apiKey The identifier for a key uploaded through the console.
* @param privateKey The private key that matches the uploaded public key for the given apiKey.
* @param method HTTP verb for this signer
* @return
*/
protected Signer buildSigner(String apiKey, Key privateKey, String method) {
final Signature signature = new Signature(
apiKey, SIGNATURE_ALGORITHM, null, REQUIRED_HEADERS.get(method.toLowerCase()));
return new Signer(privateKey, signature);
}
/**
* Sign a request, optionally including additional headers in the signature.
*
* <ol>
* <li>If missing, insert the Date header (RFC 2822).</li>
* <li>If PUT or POST, insert any missing content-type, content-length, x-content-sha256</li>
* <li>Verify that all headers to be signed are present.</li>
* <li>Set the request's Authorization header to the computed signature.</li>
* </ol>
*
* @param request The request to sign
*/
public void signRequest(HttpRequestBase request) {
final String method = request.getMethod().toLowerCase();
// nothing to sign for options
if (method.equals("options")) {
return;
}
// supply content-type, content-length, and x-content-sha256 if missing (PUT and POST only)
if (method.equals("put") || method.equals("post")) {
if (!request.containsHeader("content-type")) {
request.addHeader("content-type", "application/json");
}
if (!request.containsHeader("content-length") || !request.containsHeader("x-content-sha256")) {
byte[] body = getRequestBody((HttpEntityEnclosingRequestBase) request);
if (!request.containsHeader("content-length")) {
request.addHeader("content-length", Integer.toString(body.length));
}
if (!request.containsHeader("x-content-sha256")) {
request.addHeader("x-content-sha256", calculateSHA256(body));
}
}
}
/**
* Extract path and query string to build the (request-target) pseudo-header.
* For the URI "http://www.host.com/somePath?foo=bar" return "/somePath?foo=bar"
*/
private static String extractPath(URI uri) {
String path = uri.getRawPath();
String query = uri.getRawQuery();
if (query != null && !query.trim().isEmpty()) {
path = path + "?" + query;
}
return path;
}
/**
* Extract the headers required for signing from a {@link HttpRequestBase}, into a Map
* that can be passed to {@link RequestSigner#calculateSignature}.
*
* <p>
* Throws if a required header is missing, or if there are multiple values for a single header.
* </p>
*
* @param request The request to extract headers from.
*/
private static Map<String, String> extractHeadersToSign(HttpRequestBase request) {
List<String> headersToSign = REQUIRED_HEADERS.get(request.getMethod().toLowerCase());
if (headersToSign == null) {
throw new RuntimeException("Don't know how to sign method " + request.getMethod());
}
return headersToSign.stream()
// (request-target) is a pseudo-header
.filter(header -> !header.toLowerCase().equals("(request-target)"))
.collect(Collectors.toMap(
header -> header,
header -> {
if (!request.containsHeader(header)) {
throw new MissingRequiredHeaderException(header);
}
if (request.getHeaders(header).length > 1) {
throw new RuntimeException(
String.format("Expected one value for header %s", header));
}
return request.getFirstHeader(header).getValue();
}));
}
/**
* Wrapper around {@link Signer#sign}, returns the {@link Signature} as a String.
*
* @param method Request method (GET, POST, ...)
* @param path The path + query string for forming the (request-target) pseudo-header
* @param headers Headers to include in the signature.
*/
private String calculateSignature(String method, String path, Map<String, String> headers) {
Signer signer = this.signers.get(method);
if (signer == null) {
throw new RuntimeException("Don't know how to sign method " + method);
}
try {
return signer.sign(method, path, headers).toString();
} catch (IOException e) {
throw new RuntimeException("Failed to generate signature", e);
}
}
/**
* Calculate the Base64-encoded string representing the SHA256 of a request body
* @param body The request body to hash
*/
private String calculateSHA256(byte[] body) {
byte[] hash = Hashing.sha256().hashBytes(body).asBytes();
return Base64.getEncoder().encodeToString(hash);
}
/**
* Helper to safely extract a request body. Because an {@link HttpEntity} may not be repeatable,
* this function ensures the entity is reset after reading. Null entities are treated as an empty string.
*
* @param request A request with a (possibly null) {@link HttpEntity}
*/
private byte[] getRequestBody(HttpEntityEnclosingRequestBase request) {
HttpEntity entity = request.getEntity();
// null body is equivalent to an empty string
if (entity == null) {
return "".getBytes(StandardCharsets.UTF_8);
}
// May need to replace the request entity after consuming
boolean consumed = !entity.isRepeatable();
ByteArrayOutputStream content = new ByteArrayOutputStream();
try {
entity.writeTo(content);
} catch (IOException e) {
throw new RuntimeException("Failed to copy request body", e);
}
// Replace the now-consumed body with a copy of the content stream
byte[] body = content.toByteArray();
if (consumed) {
request.setEntity(new ByteArrayEntity(body));
}
return body;
}
}
NodeJS
/*
Version 1.0.1
Before running this example, install necessary dependencies by running:
npm install http-signature jssha
*/
var fs = require('fs');
var https = require('https');
var os = require('os');
var httpSignature = require('http-signature');
var jsSHA = require("jssha");
if(privateKeyPath.indexOf("~/") === 0) {
privateKeyPath = privateKeyPath.replace("~", os.homedir())
}
var privateKey = fs.readFileSync(privateKeyPath, 'ascii');
var headersToSign = [
"host",
"date",
"(request-target)"
];
request.setHeader("Content-Length", options.body.length);
request.setHeader("x-content-sha256", shaObj.getHash('B64'));
headersToSign = headersToSign.concat([
"content-type",
"content-length",
"x-content-sha256"
]);
}
httpSignature.sign(request, {
key: options.privateKey,
keyId: apiKeyId,
headers: headersToSign
});
return function(response) {
var responseBody = "";
response.on('data', function(chunk) {
responseBody += chunk;
});
response.on('end', function() {
callback(JSON.parse(responseBody));
});
}
}
var options = {
host: identityDomain,
path: "/20160918/users/" + encodeURIComponent(userId),
};
sign(request, {
privateKey: privateKey,
keyFingerprint: keyFingerprint,
tenancyId: tenancyId,
userId: authUserId
});
request.end();
};
var options = {
host: coreServicesDomain,
path: '/20160918/vcns',
method: 'POST',
headers: {
"Content-Type": "application/json",
}
};
sign(request, {
body: body,
privateKey: privateKey,
keyFingerprint: keyFingerprint,
tenancyId: tenancyId,
userId: authUserId
});
request.end(body);
};
getUser(authUserId, function(data) {
console.log(data);
console.log("\nCREATING VCN:");
Perl
This sample omits the optional version field in the Authorization header.
#!/usr/bin/perl
# Version 1.0.1
# Need the following:
# Modules - Authen::HTTP::Signature, DateTime, DateTime::Format::HTTP, Mozilla::CA, File::Slurp, LWP::UserAgent, LWP::Protocol::https
# LWP::UserAgent and LWP::Protocol::https >= 6.06
use strict;
use warnings;
{
package OCISigner;
use Authen::HTTP::Signature;
use Digest::SHA qw(sha256_base64);
use DateTime;
use DateTime::Format::HTTP;
my @generic_headers = (
'date', '(request-target)', 'host'
);
my @body_headers = (
'content-length', 'content-type', 'x-content-sha256'
);
my @all_headers = (@generic_headers, @body_headers);
my %required_headers = (
get => \@generic_headers,
delete => \@generic_headers,
head => \@generic_headers,
post => \@all_headers,
put => \@all_headers
);
sub new {
my ( $class, $api_key, $private_key) = @_;
my $self = {
_api_key => $api_key,
_private_key => $private_key
};
bless $self, $class;
return $self;
}
sub sign_request {
my ( $self, $request ) = @_;
my $verb = lc($request->method);
my $sign_body = grep(/^$verb$/, ('post', 'put'));
$self->inject_missing_headers($request, $sign_body);
my $headers = $required_headers{$verb};
my $all_auth = Authen::HTTP::Signature->new(
key => $self->{_private_key},
request => $request,
key_id => $self->{_api_key},
headers => $headers,
);
$all_auth->sign();
}
sub inject_missing_headers {
my ( $self, $request, $sign_body ) = @_;
$request->header('content-type', 'application/json') unless $request->header('content-type');
$request->header('accept', '*/*') unless $request->header('accept');
my $class = 'DateTime::Format::HTTP';
$request->header('date', $class->format_datetime(DateTime->now)) unless $request->header('date');
sub compute_sha256 {
my ( $self, $content ) = @_;
my $digest = sha256_base64($content);
while (length($digest) % 4) {
$digest .= '=';
}
return $digest;
}
} # OCISigner
{
package OCIClient;
use LWP::UserAgent;
use Mozilla::CA;
sub new {
my ( $class, $api_key, $private_key ) = @_;
my $ua = LWP::UserAgent->new;
$ua->ssl_opts(
verify_hostname => 1,
SSL_ca_file => Mozilla::CA::SSL_ca_file()
);
my $self = {
_signer => OCISigner->new($api_key, $private_key),
_ua => $ua
};
bless $self, $class;
return $self;
}
sub make_request {
my ( $self, $request ) = @_;
print "Sending request\n";
$self->{_signer}->sign_request($request);
my $response = $self->{_ua}->request($request);
if ($response->is_success) {
my $message = $response->decoded_content;
print "Received reply: $message\n";
}
else {
print "HTTP GET error code: ", $response->code, "\n";
print "HTTP GET error message: ", $response->message, "\n";
}
}
} # OCIClient
my $api_key = "ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq/" .
"ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq/" .
"20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34";
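The compute_sha256 helper above base64-encodes the SHA-256 digest of the request body and pads the result to a multiple of four characters, because Perl's sha256_base64 omits the trailing "=". A minimal Python equivalent of that helper, for comparison:

```python
import base64
import hashlib

def compute_sha256(content: bytes) -> str:
    # base64.b64encode already emits '=' padding, so no manual padding loop
    # is needed here, unlike the output of Perl's sha256_base64.
    return base64.b64encode(hashlib.sha256(content).digest()).decode("ascii")

# The x-content-sha256 value for an empty body:
print(compute_sha256(b""))  # 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
```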
PHP
<?php
// Version 1.0.0
//
// Dependencies:
// - PHP curl extension
// - Guzzle (composer require guzzlehttp/guzzle)
//
require 'vendor/autoload.php';
use Psr\Http\Message\RequestInterface;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Handler\CurlHandler;
use GuzzleHttp\Client;
use GuzzleHttp\Middleware;
$namespace = 'MyNamespace';
$bucket_name = 'MyBucket';
$file_to_upload = 'myfile.txt';
$key_id = "$tenancy_id/$user_id/$thumbprint";
return base64_encode($signature);
}
$content_type = $request->getHeader('Content-Type')[0];
$content_sha256 = base64_encode(hex2bin(hash("sha256", $request->getBody())));
return $request;
}
// EXAMPLE REQUESTS
$handler = new CurlHandler();
$stack = HandlerStack::create($handler);
// Upload an object
echo "************************************".PHP_EOL;
echo "Uploading object...".PHP_EOL;
echo "************************************".PHP_EOL;
$response = $client->put("https://objectstorage.$region.oraclecloud.com/n/$namespace/b/$bucket_name/o/NewObject2",
    [ "body" => $body, 'headers' => ['Content-Type' => 'application/octet-stream']]);
echo "\nResponse:\n";
echo $response->getStatusCode().PHP_EOL;
echo $response->getBody().PHP_EOL;
?>
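In every sample the signing keyId is three values joined by slashes: the tenancy OCID, the user OCID, and the fingerprint of the uploaded API key. Sketched in Python, reusing the placeholder OCIDs that appear throughout these samples:

```python
# Placeholder values from the samples in this section, not real credentials.
tenancy_id = "ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq"
user_id = "ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq"
fingerprint = "20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34"

# keyId format: <tenancy OCID>/<user OCID>/<key fingerprint>
key_id = "/".join([tenancy_id, user_id, fingerprint])
```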
Python
This sample omits the optional version field in the Authorization header.
import base64
import email.utils
import hashlib
import requests
import six
class SignedRequestAuth(requests.auth.AuthBase):
"""A requests auth instance that can be reused across requests"""
generic_headers = [
"date",
"(request-target)",
"host"
]
body_headers = [
"content-length",
"content-type",
"x-content-sha256",
]
required_headers = {
"get": generic_headers,
"head": generic_headers,
"delete": generic_headers,
"put": generic_headers + body_headers,
"post": generic_headers + body_headers
}
algorithm="rsa-sha256", headers=headers[:])
use_host = "host" in headers
self.signers[method] = (signer, use_host)
# Inject body headers for put/post requests, date for all requests
sign_body = verb in ["put", "post"]
self.inject_missing_headers(request, sign_body=sign_body)
if use_host:
host = six.moves.urllib.parse.urlparse(request.url).netloc
else:
host = None
signed_headers = signer.sign(
request.headers, host=host,
method=request.method, path=request.path_url)
request.headers.update(signed_headers)
return request
headers = {
"content-type": "application/json",
"date": email.utils.formatdate(usegmt=True),
# Uncomment to use a fixed date
# "date": "Thu, 05 Jan 2014 21:31:40 GMT"
}
"%3A")
)
response = requests.get(uri, auth=auth, headers=headers)
print(uri)
print(response.request.headers["Authorization"])
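The required_headers table above determines which headers enter the signing string for each verb. How such a signing string is assembled can be sketched as follows (following the draft HTTP Signature scheme these samples rely on; signing_string is an illustrative helper, and the host and path values are examples, not part of the sample above):

```python
def signing_string(headers_to_sign, method, path, headers):
    # Each signed header contributes a "name: value" line; the pseudo-header
    # "(request-target)" is the lowercased verb plus the path and query string.
    lines = []
    for name in headers_to_sign:
        if name == "(request-target)":
            lines.append("(request-target): %s %s" % (method.lower(), path))
        else:
            lines.append("%s: %s" % (name, headers[name]))
    return "\n".join(lines)

example = signing_string(
    ["date", "(request-target)", "host"],
    "GET", "/20160918/instances",
    {"date": "Thu, 05 Jan 2014 21:31:40 GMT",
     "host": "iaas.us-phoenix-1.oraclecloud.com"})
```

The resulting string is what the private key signs; its base64-encoded signature goes into the Authorization header.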
Ruby
require 'base64'
require 'digest'
require 'openssl'
require 'time'
require 'uri'
require 'httparty'
# Version 1.0.1
class Client
include HTTParty
attr_reader :signer
class Signer
class << self
attr_reader :headers
end
end
end
api_key = [
"ocid1.tenancy.oc1..aaaaaaaaba3pv6wkcr4jqae5f15p2b2m2yt2j6rx32uzr4h25vqstifsfdsq",
"ocid1.user.oc1..aaaaaaaat5nvwcna5j6aqzjcaty5eqbb6qt2jvpkanghtgdaqedqw3rynjq",
"20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34"
].join("/")
private_key = OpenSSL::PKey::RSA.new(File.read("../sample-private-key"))
client = Client.new(api_key, private_key)
headers = {
# Uncomment to use a fixed date
# "date" => "Thu, 05 Jan 2014 21:31:40 GMT"
}
})
response = client.post(uri, headers: headers, body: body)
puts "\n" + uri
puts response.request.options[:headers]["authorization"]
puts response.response
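Several of these samples note that they omit the optional version field from the Authorization header. The header the signing libraries ultimately emit has this general shape (a sketch only; authorization_header is an illustrative helper, not part of any sample):

```python
def authorization_header(key_id, signed_headers, signature_b64):
    # General shape of the signed Authorization header; the optional
    # version field mentioned above would appear as version="1".
    return ('Signature keyId="%s",algorithm="rsa-sha256",'
            'headers="%s",signature="%s"'
            % (key_id, " ".join(signed_headers), signature_b64))
```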
API Reference
You can find the API reference documentation here:
• Audit
• Core Services API (covers Networking, Compute, and Block Volume)
• Database
• File Storage
• DNS Zone Management
• IAM
• Load Balancing
• Object Storage and Archive Storage
• Amazon S3 Compatibility API
API Errors
400 LimitExceeded    The Oracle-defined limit for this tenancy for this resource type would be exceeded by fulfilling this request.
409 IncorrectState   The requested state for the resource conflicts with its current state.
412 NoEtagMatch      The ETag specified in the request does not match the ETag for the resource.
429 TooManyRequests  You have issued too many requests to the Oracle Cloud Infrastructure APIs in too short a period of time.
• Clock skew. This status code is returned if the client's clock is skewed more than 5 minutes from the server's clock. For more information, see Maximum Allowed Client Clock Skew.
• API request signature error. This status code is returned if a required header is missing from the signing string. For more information, see Request Signatures.
• Authorization error. Verify that the user is in a group that has permission to work with resources in the compartment.
• Compartment or resource not found. Verify that the compartment or resource exists and is referenced correctly. For example, this status code is returned for either of the following errors:
  - CompartmentNotFound if a compartment doesn't exist
  - VolumeNotFound if a volume doesn't exist
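The clock-skew rule above can be sketched as a simple check: compare the client's Date header against the server clock and reject anything more than 5 minutes apart (within_allowed_skew is an illustrative helper, not an Oracle API):

```python
import email.utils
from datetime import datetime, timezone

MAX_SKEW_SECONDS = 300  # maximum allowed client clock skew: 5 minutes

def within_allowed_skew(client_date_header: str, server_now: datetime) -> bool:
    # Parse the RFC 1123 Date header the samples send and compare it with
    # the server clock; requests outside the window are rejected.
    client_time = email.utils.parsedate_to_datetime(client_date_header)
    return abs((server_now - client_time).total_seconds()) <= MAX_SKEW_SECONDS
```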
API key
A credential for securing requests to the Oracle Cloud Infrastructure REST API.
attach
Link a volume and instance together. Allows an instance to connect and mount the volume as a
hard drive.
availability domain
One or more isolated, fault-tolerant Oracle data centers that host cloud resources such as
instances, volumes, and subnets. A region contains several availability domains.
backend set
A logical entity defined by a list of backend servers, a load balancing policy, and a health check
policy.
bare metal
A cloud infrastructure that allows you to utilize hosted physical hardware, as opposed to
traditional software-based virtual machines, ensuring a high level of security and performance.
block volume
A virtual disk that provides persistent storage space for instances in the cloud.
bucket
CHAP
Block Volume
A service that allows you to add block storage volumes to an instance in order to expand the
available storage on that resource.
cloud network
A virtual version of a traditional network—including CIDRs, subnets, route tables, and gateways—
on which your instance runs.
compartment
A collection of related resources that can be accessed only by certain groups that have been given
permission by an administrator in your organization.
Compute
A service that lets you provision and manage compute hosts, known as instances.
connect
CPE
A virtual representation of the edge router at your end of an IPSec VPN that connects your VCN
and on-premises network.
cross-connect
Used with Oracle Cloud Infrastructure FastConnect. In a colocation scenario, this is the physical
cable connecting your existing network to Oracle in the FastConnect location.
cross-connect group
Used with Oracle Cloud Infrastructure FastConnect. In a colocation scenario, this is a link
aggregation group (LAG) that contains at least one cross-connect.
customer-premises equipment
A virtual representation of the edge router at your end of an IPSec VPN that connects your VCN
and on-premises network.
DB System
A dedicated bare metal instance running Oracle Linux, optimized for running one or more Oracle
databases. A DB System is a Database Service resource.
DHCP options
Configuration information that is automatically provided to the instances when they boot up.
display name
A friendly name or description that helps you easily identify the resource.
DRG
An optional virtual router that you can add to your VCN to provide a path for private network
traffic between your VCN and on-premises network.
dynamic routing gateway
An optional virtual router that you can add to your VCN to provide a path for private network
traffic between your VCN and on-premises network.
ephemeral public IP
A public IP address (and related properties) that is temporary and exists for the life of the instance
it's assigned to. It can be assigned only to the primary private IP on a VNIC. Compare with
reserved public IP.
File System
group
A collection of users who all need a particular type of access to a set of resources or compartment.
guest OS
health check
IaaS
A service that allows customers to rapidly scale up or down their computer infrastructure
(computing, storage, or network).
IAM
The service for controlling authentication and authorization of users who need to use your cloud
resources.
Identity and Access Management
The service for controlling authentication and authorization of users who need to use your cloud
resources. Also called "IAM".
identity provider
A service that provides identifying credentials and authentication for federated users.
IdP
Short for "identity provider", which is a service that provides identifying credentials and
authentication for federated users.
image
A template of a virtual hard drive that determines the operating system and other software for an
instance.
Infrastructure-as-a-Service
A service that allows customers to rapidly scale up or down their computer infrastructure
(computing, storage, or network).
instance
A Bare Metal Cloud compute host. The image used to launch the instance determines its operating
system and other software. The shape specified during the launch process determines the
number of CPUs and memory allocated to the instance.
internet gateway
An optional virtual router that you can add to your VCN. It provides a path for network traffic
between your VCN and the Internet.
IPSec connection
The secure connection between a dynamic routing gateway (DRG) and customer-premises
equipment (CPE), consisting of multiple IPSec tunnels. The IPSec connection is one of the
components forming a site-to-site VPN between a virtual cloud network (VCN) and your on-
premises network.
IQN
iSCSI
A TCP/IP based standard used for communication between a volume and attached instance.
key pair
A security mechanism consisting of a public key and a private key. Required (for example) for
Secure Shell (SSH) access to an instance.
listener
An entity that checks for incoming traffic on the load balancer's public floating IP address.
local peering gateway
A component on a VCN for routing traffic to a locally peered VCN. "Local" peering means the two
VCNs are in the same region.
local VCN peering
The process of connecting two VCNs in the same region so that their resources can communicate
without routing the traffic over the internet or through your on-premises network.
LPG
A component on a VCN for routing traffic to a locally peered VCN. "Local" peering means the two
VCNs are in the same region.
Mount Point
A directory from which a client may access a remote File Storage Service file system.
Mount Target
namespace
The logical entity that lets you own your personal bucket names. Bucket names need to be unique
within the context of a namespace, but bucket names can be repeated across namespaces.
object
Any type of data, regardless of content type, is stored as an object. The object is composed of the
object itself and metadata about the object. Each object is stored in a bucket.
OCID
An Oracle-assigned unique ID called an Oracle Cloud Identifier (OCID). This ID is included as part
of the resource's information in both the Console and API.
one-time password
A single-use Console password that Oracle assigns to a new user, or to an existing user who
requested a password reset.
Oracle Cloud Identifier
An Oracle-assigned unique ID called an Oracle Cloud Identifier (OCID). This ID is included as part
of the resource's information in both the Console and API.
OTP
A single-use Console password that Oracle assigns to a new user, or to an existing user who
requested a password reset.
policy
A document in the IAM that specifies who has what type of access to your resources. It is used in
different ways: to mean an individual statement written in the policy language; to mean a
collection of statements in a single, named "policy" document (which has an Oracle Cloud ID
(OCID) assigned to it); and to mean the overall body of policies your organization uses to control
access to resources.
policy statement
Policies can contain one or more individual statements. Each statement gives a group a certain
type of access to certain resources in a particular compartment.
primary IP
The private IP that is automatically created and assigned to a VNIC during creation.
primary VNIC
The VNIC that is automatically created and attached to an instance during launch.
private IP
An object that contains a private IP address and related properties such as a hostname for DNS.
Each instance automatically comes with a primary private IP, and you can add secondary ones.
private subnet
public IP
An object that contains a public IP address and related properties. You control whether each
private IP on an instance has an assigned public IP. There are two types: reserved public IPs and
ephemeral public IPs.
public subnet
A subnet in which instances are allowed to have public IP addresses. When you launch an
instance in a public subnet, you specify whether the instance should have a public IP address.
region
reserved public IP
A public IP address (and related properties) that you create in your tenancy and assign to your
instances in a given region as you like. It persists in your tenancy until you delete it. It can be
assigned to any private IP on a given VNIC, not just the primary private IP. Compare with
ephemeral public IP.
resource
The cloud objects that your company's employees create and use when interacting with Oracle
Cloud Infrastructure.
route table
Virtual route table for your VCN that provides mapping for the traffic from subnets via gateways to
external destinations.
secondary IP address
An additional private IP you've added to a VNIC on an instance. Each VNIC automatically comes
with a primary private IP that cannot be removed.
secondary VNIC
An additional VNIC you've added to an instance. Each instance automatically comes with a
primary VNIC that cannot be removed.
security list
shape
A template that determines the number of CPUs and the amount of memory allocated to a newly
created instance.
statement
Policies can contain one or more individual statements. Each statement gives a group a certain
type of access to certain resources in a particular compartment.
subnet
Subdivision of your VCN used to separate your network into multiple smaller, distinct networks.
Swift password
Swift is the OpenStack object store service. A Swift password enables you to use an existing Swift
client with Oracle Cloud Infrastructure Object Storage.
tenancy
The root compartment that contains all of your organization's compartments and other Oracle
Cloud Infrastructure cloud resources.
tenant
The name assigned to a particular company's or organization's overall environment. Users provide
their tenant when signing in to the Console.
user
An individual employee or system that needs to manage or use your company's Oracle Cloud
Infrastructure resources.
VCN
A virtual version of a traditional network—including CIDRs, subnets, route tables, and gateways—
on which your instance runs.
virtual circuit
Used with Oracle Cloud Infrastructure FastConnect. An isolated network path that runs over one
or more physical network connections to provide a single, logical connection between the edge of
your existing network and Oracle Cloud Infrastructure.
virtual cloud network
A virtual version of a traditional network—including CIDRs, subnets, route tables, and gateways—
on which your instance runs.
virtual machine
A software-based emulation of a full computer that runs within a physical host computer.
virtual network interface card
A VNIC enables an instance to connect to a VCN and determines how the instance connects with
endpoints inside and outside the VCN. Each instance automatically comes with a primary VNIC,
and you can add secondary ones.
VM
A software-based emulation of a full computer that runs within a physical host computer.
VNIC
A VNIC enables an instance to connect to a VCN and determines how the instance connects with
endpoints inside and outside the VCN. Each instance automatically comes with a primary VNIC,
and you can add secondary ones.
volume
A detachable block storage device that allows you to dynamically expand the storage capacity of
an instance.
work request
29-Jan-2018 File Storage 20171215 File Storage: The File Storage service is now
available. For more information, see Overview
of File Storage.
24-Jan-2018 Networking 20160918 Public IPs: You can now create reserved
public IPs, and move them between your
instances in a given region. You can also
control whether a given VNIC has an
ephemeral public IP assigned to its primary
private IP. For more information, see Public IP
Addresses.
19-Dec-2017 IAM N/A All new tenancies are federated with Oracle
Identity Cloud Service. For more information,
see Frequently Asked Questions for Oracle
Identity Cloud Service Federated Users.
13-Dec-2017 Data Transfer N/A Data Transfer Service: New offline data
transfer solution that lets you migrate large
volumes of data to Oracle Cloud
Infrastructure. For more information, see
Overview of Data Transfer.
13-Dec-2017 N/A N/A Service limits: You can now see your
tenancy's service limits in the Console. For
more information, see Service Limits.
15-Nov-2017 Block Volume 20160918 Boot volumes: Now when you launch a
virtual machine (VM) or bare metal instance
based on an Oracle-provided image or custom
image, a new boot volume for the instance is
created in the same compartment. See Boot
Volumes for more information.
06-Nov-2017 Networking 20160918 Local VCN peering: You can connect two
VCNs in the same region so their resources can
communicate without routing the traffic over
the internet. For more information, see VCN
Peering.
20-Oct-2017 Block Volume 20160918 Cloned volumes: You can now make a copy
of an existing block volume without needing to
go through the backup and restore process.
See Cloning a Volume.
19-Oct-2017 Audit 20160918 Audit Log Retention Period: You can now
modify the log retention period for your
tenancy. See Setting Audit Log Retention
Period.
13-Oct-2017 Database 20160918 Managed backups: You can now use the
Console or the API to manage backups to
Oracle Cloud Infrastructure Object Storage. For
more information, see Backing Up to Oracle
Cloud Infrastructure Object Storage.
28-Sep-2017 Block Volume 20160918 Volumes: You can now create 16 TB volumes.
13-Sep-2017 Oracle Cloud Infrastructure 20160918 NTP server: Oracle Cloud Infrastructure now
offers a fully managed, secure, and highly
available NTP server that you can use to set
the date and time of your Compute and
Database instances from within your virtual
cloud network (VCN). For more information,
see Configuring the Oracle Cloud
Infrastructure NTP Server for an Instance.
06-Sep-2017 Load Balancing 20170115 Health Status API: You can view health
status indicators that leverage your health
check policies to report on the general health
of load balancers and their components.
23-Aug-2017 Object Storage 20160918 Amazon S3 Compatibility API: You can use
existing S3 tools (for example, SDK clients)
with Object Storage, making only minimal
changes to your applications.
06-Jul-2017 Networking 20160918 VCN IP address ranges: You can now create
a VCN using any valid IP address range. Oracle
recommends using one of the private IP
address ranges specified in RFC 1918. See
Allowed VCN Size and Address Ranges.
13-Jun-2017 Load Balancing 20170115 Private load balancers: You can create a
local load balancer with a private IP address to
isolate your load balancer from the internet.
09-Jun-2017 Compute 20160918 Chef Knife plugin: The Chef Knife plugin can
be used as a tool for orchestrating server
deployments.
08-Jun-2017 Object Storage 20160918 Public buckets: You can mark buckets as
public, which means that there is anonymous,
unauthenticated access to bucket contents.
This does not include write access.
15-May-2017 Block Volume 20160918 Volume size: When creating a volume, you
can select a size in 1 GB increments between
50 GB and 2 TB. The default is 1024 GB. For
more information, see Creating a Volume.
28-Mar-2017 Object Storage 20160918 Multipart uploads: You can use multipart
uploads to upload an object in parts. For more
information, see Managing Multipart Uploads.
2-Mar-2017 Networking 20160918 DNS: The Internet and VCN Resolver is now
available. For more information, see DNS in
Your Virtual Cloud Network.
21-Feb-2017 Audit 20160918 Audit: The new Audit service is now available.
For more information, see Overview of Audit.
08-Feb-2017 Compute 20160918 CentOS images: Two CentOS images are now
available. For more information, see Oracle-
Provided Images.
26-Jan-2017 Load Balancing 20170115 Load Balancing Service: The new Load
Balancing Service is now available. You can
buy the service at the Oracle Store.