Oracle Fusion Cloud EPM
Administering Data Management for Oracle Enterprise Performance Management Cloud
E96322-49
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related
documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S.
Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software,
any programs embedded, installed, or activated on delivered hardware, and modifications of such programs)
and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end
users are "commercial computer software," "commercial computer software documentation," or "limited rights
data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental
regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation
of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated
software, any programs embedded, installed, or activated on delivered hardware, and modifications of such
programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and
limitations specified in the license contained in the applicable contract. The terms governing the U.S.
Government's use of Oracle cloud services are defined by the applicable contract for such services. No other
rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.
Oracle®, Java, and MySQL are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc,
and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Documentation Accessibility
Documentation Feedback
Predefining a List of Profiles 2-14
Setting System-Level Profiles 2-15
Setting Application-Level Profiles 2-21
Setting User Level Profiles 2-27
Setting Security Options 2-29
Setting up Source Systems 2-35
Registering File-Based Source Systems 2-35
Registering Oracle ERP Cloud Source Systems 2-37
Deleting Registered Source Systems 2-39
Editing Registered Source System Details 2-39
Adding File-Based Data Load Definitions 2-40
Registering Target Applications 2-40
Reusing Target Application Names Multiple Times 2-43
Creating a Data Export File 2-44
Creating a Custom Target Application 2-51
Adding Lookup Dimensions 2-53
Defining Application Dimension Details 2-54
Defining Application Options for Planning 2-55
Defining Application Options for Essbase 2-62
Defining Application Options for Financial Consolidation and Close 2-69
Defining Application Options for Tax Reporting 2-79
Defining Application Options for Profitability and Cost Management 2-88
Deleting Registered Target Applications 2-88
Executing Business Rules When Loading Data 2-89
Loading Data Using an Incremental File Adapter 2-90
Loading Data to a Free Form Application 2-95
3 Integrating Data
Integrating Data Using a File 3-1
Integrating File-Based Data Process Description 3-1
Registering File-Based Source Systems 3-2
Registering Target Applications 3-4
Defining Import Formats for File-Based Mappings 3-6
Concatenating Source Dimensions for a File-Based Source 3-11
Loading Multiple Periods for EPM Cloud or File-Based Source Systems 3-11
Defining Data Load Rule Details for a File-Based Source System 3-12
Running Data Load Rules 3-13
Using Drilling Through 3-14
Creating the Drill Region 3-15
Drill Through Components 3-16
Adding the Server Component for the Drill Through URL 3-16
Adding the Detail Component for the Drill Through URL 3-17
Viewing the Drill Through Results 3-19
Integrating Metadata 3-21
Loading Metadata Process Description 3-22
Metadata Load File Considerations 3-24
Registering a Target Application for the Class of Dimension or Dimension Type 3-25
Working with Metadata Batch Definitions 3-26
Integrating Oracle ERP Cloud Oracle General Ledger Applications 3-27
Integration Process Description 3-28
Configuring a Source Connection 3-29
Working with Import Formats 3-32
Defining Locations 3-33
Defining Category Mappings 3-35
Data Load Mapping 3-35
Adding Data Load Rules 3-37
Processing Oracle General Ledger Adjustment Periods 3-38
Adding Filters for Data Load Rules 3-40
Drilling Through to the Oracle ERP Cloud 3-42
Writing Back to the Oracle ERP Cloud 3-42
Writing Back Budgets to the Oracle ERP Cloud 3-43
Writing Back Actuals to the Oracle ERP Cloud - Oracle General Ledger 3-47
Integrating Budgetary Control 3-51
Loading Budgetary Control Budget Consumption Balances to the EPM Cloud Process Description 3-52
Configuring a Connection to a Budgetary Control Source 3-53
Working with Import Formats 3-55
Defining Locations 3-56
Defining Category Mappings 3-58
Data Load Mapping 3-59
Adding Data Load Rules 3-60
Writing Back EPM Cloud Budget Balances to the Budgetary Control Process Description 3-63
Working with Import Formats 3-65
Defining Locations 3-66
Defining Category Mappings 3-68
Data Load Mapping 3-68
Adding Data Load Rules 3-70
Viewing the EPM Cloud Budgets Loaded to Budgetary Control 3-73
Integrating Oracle NetSuite 3-76
Supported NSPB Sync SuiteApp Saved Searches 3-76
Process Description for Integrating Oracle NetSuite 3-78
Configuring a Source Connection to Oracle NetSuite 3-80
Creating an Oracle NetSuite Data Source 3-85
Applying Oracle NetSuite Application Filters 3-87
Adding Additional Filters to the Drill URL in the Import Format 3-87
Managing Periods in Oracle NetSuite 3-89
Filtering Oracle NetSuite Periods 3-89
Adding Import Formats for Oracle NetSuite Data Sources 3-90
Adding Data Load Rules for an Oracle NetSuite Data Source 3-91
Drilling Through to Oracle NetSuite 3-93
Defining Drill Through Parameters to Oracle NetSuite 3-93
Saved Search Requirements in the Drill Through 3-94
Adding the Drill Through URL 3-95
Integrating with the Oracle HCM Cloud 3-96
Process Description for Integrating Data from Oracle HCM Cloud 3-96
Updating Existing Oracle HCM Cloud Extracts 3-102
Configuring a Source Connection to an Oracle HCM Cloud Data Source Application 3-103
Importing Oracle HCM Cloud Extract Definitions to Oracle HCM Cloud 3-104
Importing the Oracle HCM Cloud Extract Definition 3-105
Importing the BI Publisher eText Templates 3-107
Submitting the Oracle HCM Cloud Extract Definition 3-109
Downloading the Oracle HCM Cloud Extract Definition 3-110
Creating an Oracle HCM Cloud Data Source Application 3-112
Editing Application Filters for the Oracle HCM Cloud Data Source Application 3-114
Adding Data Load Rules for an Oracle HCM Cloud Data Source Application 3-116
Integrating Oracle HCM Cloud Metadata 3-117
Loading Oracle HCM Cloud Metadata 3-118
Loading Data from the Oracle ERP Cloud 3-120
Process Description for Integrating Oracle ERP Cloud Using Prepackaged Queries 3-121
Configuring a Source Connection for an Oracle ERP Cloud Source System 3-122
Creating an Oracle ERP Cloud Data Source 3-123
Applying Application Filters to an Oracle ERP Cloud Data Source 3-123
Selecting Period Report Parameters from the Oracle ERP Cloud 3-126
Process Description for Integrating Oracle ERP Cloud Data Using a Custom Query 3-127
Security Role Requirements for Oracle ERP Cloud Integrations 3-136
Integration User Privileges 3-136
Integration User Predefined Roles 3-137
Integration User Custom Roles 3-137
Allowlist 3-137
Integrating Account Reconciliation Data 3-137
Integrating BAI and SWIFT MT940 Format Bank File Transactions and Balances 3-138
Integrating BAI Format Bank File or SWIFT MT940 Format Bank File Transactions 3-138
Integrating BAI Format Bank File or SWIFT MT940 Balances 3-142
Adding a Transaction Matching Target Application 3-150
Aggregating Transaction Matching Data 3-152
Loading Reconciliation Compliance Transactions 3-155
Loading Reconciliation Compliance Transactions Process Description 3-155
Adding a Reconciliation Compliance Transactions Application 3-156
Mapping Reconciliation Compliance Transactions Attributes to Dimensions 3-157
Creating an Import Format for Reconciliation Compliance Transactions 3-160
Defining the Location 3-161
Defining a Period for Reconciliation Compliance Transactions 3-161
Creating a Data Load Mapping for Reconciliation Compliance Transactions 3-162
Running the Data Load Rule for Reconciliation Compliance Transactions 3-163
Loading Exported Journal Entries 3-165
Integrating EPM Planning Projects and Oracle Fusion Cloud Project Management (Project Management) 3-169
4 Integration Tasks
Working with Import Formats 4-1
Defining the Import Format 4-2
Viewing Import Format Information 4-2
Adding Import Formats 4-2
Deleting an Import Format 4-4
Querying by Example 4-4
Adding Import Expressions 4-4
Import Expression Types 4-5
Processing Order 4-7
Defining Import Formats for File-Based Mappings 4-7
Concatenating Source Dimensions for a File-Based Source 4-12
Using the Import Format Builder 4-13
All Data Types Data Loads 4-14
All Data Types Data Load Process Description 4-15
Setting the All Data Types Load Method 4-15
Setting the Import Format Data Types 4-19
Setting the Import Format for Multi-Column Data Types 4-20
Loading Multi-Column Numeric Data 4-25
Using Workflow Modes 4-28
Selecting the Workflow Mode 4-29
Defining Locations 4-30
Defining Period Mappings 4-33
Global Mappings 4-36
Application Mappings 4-36
Source Mappings 4-37
Defining Category Mappings 4-39
Global Mappings 4-39
Application Mappings 4-40
Loading Data 4-40
Creating Member Mappings 4-40
Creating Mappings Using the Explicit Method 4-42
Creating Mappings Using the In Method 4-43
Creating Mappings Using the Between Method 4-43
Creating Mappings Using the Multi-Dimension Method 4-44
Using Special Characters in Multi-Dimensional Mapping 4-45
Creating Mappings Using the Like Method 4-45
Using Special Characters in the Source Value Expression for Like Mappings 4-46
Conditional Mapping using a Mapping Script 4-48
Using Mapping Scripts 4-49
Creating Mapping Scripts 4-50
Using Special Characters in the Target Value Expression 4-51
Format Mask Mapping for Target Values 4-51
Ignoring Member Mappings 4-54
Importing Member Mappings 4-55
Downloading an Excel Template (Mapping Template) 4-56
Importing Excel Mappings 4-58
Exporting Member Mappings 4-59
Deleting Member Mappings 4-60
Restoring Member Mappings 4-60
Defining Data Load Rules to Extract Data 4-60
Defining Data Load Rule Details 4-61
Defining Data Load Rule Details for a File-Based Source System 4-62
Defining Source Parameters for Planning and Essbase 4-63
Managing Data Load Rules 4-65
Editing Data Load Rules 4-65
Running Data Load Rules 4-65
Scheduling Data Load Rules 4-67
Checking the Data Load Rule Status 4-67
Deleting Data Load Rules 4-67
Working with Target Options 4-68
Creating Custom Options 4-68
Using the Data Load Workbench 4-68
Workflow Grid 4-69
Processing Data 4-69
Using the Workbench Data Grid 4-72
Viewing Process Details 4-77
Using Excel Trial Balance Files to Import Data 4-79
Text Trial Balance Files Versus Excel Trial Balance Files 4-79
Downloading an Excel Trial Balance Template 4-79
Defining Excel Trial Balance Templates 4-79
Adding a Multiple Period Data Load Using Excel 4-80
Importing Excel Mapping 4-81
Loading Multiple Periods for EPM Cloud or File-Based Source Systems 4-82
Loading Periods as a Column from the Data File 4-83
Loading Journals to Financial Consolidation and Close 4-84
Loading Financial Consolidation and Close Journals Process Description 4-84
Working with Journal Loads and Import Formats 4-84
Working with Journal Loads and the Data Load Rule 4-86
Loading Journals from the Data Load Workbench 4-87
Loading Text-Based Journals 4-93
Service Instance Integrations 4-93
Setting up Business Process Instance Deployments 4-94
Loading Data Between Service Instances 4-95
Data Load, Synchronization and Write Back 4-96
Overview 4-96
Synchronizing and Writing Back Data 4-96
Data Synchronization 4-97
Write-Back 4-101
Logic Accounts 4-106
Overview of Logic Accounts 4-106
Creating a Logic Group 4-106
Creating Accounts in a Simple Logic Group 4-107
Logic Group Fields 4-107
Creating Complex Logic Accounts 4-113
Complex Logic Example 1: CashTx 4-115
Complex Logic Example 2: CashTx 4-116
Check Rules 4-117
Overview of Check Rules 4-117
Creating Check Rule Groups 4-118
Creating Check Rules 4-118
Rule Logic 4-120
Using the Rule Logic Editor to Create Check Rules 4-121
Creating Check Entity Groups 4-132
5 Batch Processing
Working with Batch Definitions 5-1
Adding a Batch Group 5-6
Using Open Batches 5-6
Creating Open Batches 5-7
Creating an Open Batch to Run an Integration with E-Business Suite 5-9
Creating Open Batches for Multiple Periods 5-9
Executing Batches 5-12
Scheduling Jobs 5-12
Canceling a Scheduled Job 5-14
Dimension Map (Dimension) 6-14
Dimension Map For POV (Dimension, Cat, Per) 6-14
Process Monitor Reports 6-15
Process Monitor (Cat, Per) 6-15
Process Status Period Range (Cat, Start Per, End Per) 6-15
Process Monitor All Categories (Cat, Per) 6-16
Variance Reports 6-16
Account Chase Variance 6-16
Trial Balance Variance 6-16
Employee Extract Definition Fields B-3
Entity Extract Definition Fields B-4
Job Extract Definition Fields B-4
Location Extract Definition Fields B-4
Position Extract Definition Fields B-5
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Access to Oracle Support
Oracle customers that have purchased support have access to electronic support through My
Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info
or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Documentation Feedback
Documentation Feedback
To provide feedback on this documentation, click the feedback button at the bottom of
the page in any Oracle Help Center topic. You can also send email to
[email protected].
1 Creating and Running an EPM Center of Excellence
A best practice for EPM is to create a CoE (Center of Excellence).
An EPM CoE is a unified effort to ensure adoption and best practices. It drives transformation
in business processes related to performance management and the use of technology-
enabled solutions.
Cloud adoption can empower your organization to improve business agility and promote
innovative solutions. An EPM CoE oversees your cloud initiative, and it can help protect and
maintain your investment and promote effective use.
The EPM CoE team:
• Ensures cloud adoption, helping your organization get the most out of your Cloud EPM
investment
• Serves as a steering committee for best practices
• Leads EPM-related change management initiatives and drives transformation
All customers can benefit from an EPM CoE, including customers who have already
implemented EPM.
Learn More
• Watch the Cloud Customer Connect webinar: Creating and Running a Center of
Excellence (CoE) for Cloud EPM
• Watch the videos: Overview: EPM Center of Excellence and Creating a Center of
Excellence.
• See the business benefits and value proposition of an EPM CoE in Creating and Running
an EPM Center of Excellence.
2 Using Data Management
You can also:
• integrate file-based data from an Enterprise Resource Planning (ERP) source system into
an Enterprise Performance Management (EPM) target application.
• drill through from the EPM target application and view data in the Enterprise Resource
Planning (ERP) source system.
• integrate Oracle General Ledger data with the Oracle Enterprise Performance
Management Cloud if you use Fusion Cloud Release 11 or higher.
Data Management also supports the Financial Accounting Hub (FAH) and the Financial
Accounting Hub Reporting Cloud Service (FAHRCS) as part of its integration with the
Oracle General Ledger.
• load commitments, obligations, and expenditures from Budgetary Control to Planning
and Planning Modules applications.
• load Human Resources data from Oracle Human Capital Management Cloud to use in
the Oracle Hyperion Workforce Planning business process of Planning Modules.
Data Management enables you to work with the following cloud services belonging to the
EPM Cloud:
• Planning Modules
• Planning
• Financial Consolidation and Close
• Account Reconciliation
• Profitability and Cost Management
• Tax Reporting
• Oracle Strategic Workforce Planning Cloud
Drilling into Data
Drilling Through
You can drill through to the detail from the EPM target application and view data in the
Enterprise Resource Planning (ERP) source system. Predefined by administrators,
drill-through reports are available to users from specified individual member cells and
data cells. A cell can be associated with multiple drill-through reports. Cells that
contain drill-through reports can be indicated on the grid by a cell style.
Using Data Management with Multiple Oracle Cloud EPM Deployments
• Account Reconciliation customers who need to load bank file transactions (which use
a BAI, or Bank Administration Institute, file format or a SWIFT MT940 file
format) to the Transaction Matching module in Account Reconciliation can use
Data Management as the integration mechanism. Data Management supports a
pre-built adapter for loading:
– BAI Format Bank File Transactions
– BAI Format Bank File Balances
– SWIFT MT940 Format Bank File Transactions
– SWIFT MT940 Format Bank File Balances
For more information, see Integrating BAI and SWIFT MT940 Format Bank File
Transactions and Balances.
Note:
In addition, any other file format that Data Management supports can
also be used to import, map, and load data to the Transaction Matching
module.
• To load data to actual currency rather than entity currency when the currency is fixed,
set the currency in the Functional Currency field in the Location option. See Defining
Locations. You can also add a Currency row in the import format and map it. See
Defining the Import Format.
Data Loads
Data Management supports a variety of ways for importing data from a range of
financial data sources, and then transforming and validating the data.
Data and exchange rates can be loaded by users who have data available from their
source in a text file.
Any file, whether it is a fixed-width file or a delimited file, can be easily imported into
the Financial Consolidation and Close application. For example, you can take data
that reflects out-of-the-box translations, consolidations, eliminations, and adjustments
from the source system and map it into Data Management by way of the import
format feature. You can instruct the system where the dimensions and data values
reside in the file, as well as which rows to skip during the data import. This feature
allows a business user to easily import data from any source by way of a file format,
and requires limited technical help, if any, when loading into a Financial Consolidation
and Close application.
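As a rough sketch of this idea (not Data Management's actual implementation), an import format tells the system which source file columns feed which target dimensions and how many header rows to skip. The file layout, column positions, and member names below are all hypothetical:

```python
import csv
import io

# Hypothetical delimited trial-balance file; the header row is skipped.
sample = io.StringIO(
    "Entity,Account,Amount\n"
    "E100,Cash,1000.00\n"
    "E100,Sales,-250.00\n"
)

# A minimal stand-in for an import format definition: the number of
# rows to skip, and the source column index for each target dimension.
import_format = {"skip_rows": 1, "Entity": 0, "Account": 1, "Amount": 2}

def import_file(fh, fmt):
    reader = csv.reader(fh)
    rows = list(reader)[fmt["skip_rows"]:]  # drop header rows
    return [
        {
            "Entity": r[fmt["Entity"]],
            "Account": r[fmt["Account"]],
            "Amount": float(r[fmt["Amount"]]),
        }
        for r in rows
    ]

records = import_file(sample, import_format)
print(records[0])  # {'Entity': 'E100', 'Account': 'Cash', 'Amount': 1000.0}
```

A fixed-width file would work the same way, except the format would record character positions instead of column indexes.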
Other Considerations
1. Data may be loaded to Financial Consolidation and Close by way of Data
Management when the movement member is set to one of the following:
a. Level 0 descendants of FCCS_Mvmts_Subtotal
or
b. FCCS_OpeningBalanceAdjustment
or
c. Any level 0 members of customer-specific hierarchies that are siblings of
FCCS_Movements. (It is the hierarchies that are siblings of FCCS_Movements,
not their level zero members.)
2. Data can be loaded to Financial Consolidation and Close only at the base level.
3. Drill through from a Financial Consolidation and Close web form or from Smart View
on Financial Consolidation and Close is supported.
4. Data loaded from Data Management to Financial Consolidation and Close is
summarized based on the dimensionality in Data Management, and this
summarized data is loaded to Financial Consolidation and Close. Any
computations or consolidation logic can only be performed within Financial
Consolidation and Close.
5. Import formats support the addition of both "file" and Planning source types.
6. Data Management indicates whether the data loaded to Financial Consolidation and
Close is "PTD" or "YTD." If data is specified as YTD, Financial Consolidation and
Close performs any computations necessary to translate that data to PTD.
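The YTD-to-PTD translation mentioned in point 6 can be illustrated with a toy calculation (illustrative values only, not Financial Consolidation and Close's internal logic): each period's PTD amount is that period's YTD balance minus the prior period's YTD balance.

```python
# Hypothetical year-to-date balances by period for one account.
ytd = {"Jan": 100.0, "Feb": 250.0, "Mar": 400.0}
periods = ["Jan", "Feb", "Mar"]

def ytd_to_ptd(ytd_values, ordered_periods):
    """Derive period-to-date amounts: each period's PTD value is its
    YTD balance minus the prior period's YTD balance."""
    ptd, prior = {}, 0.0
    for p in ordered_periods:
        ptd[p] = ytd_values[p] - prior
        prior = ytd_values[p]
    return ptd

print(ytd_to_ptd(ytd, periods))
# {'Jan': 100.0, 'Feb': 150.0, 'Mar': 150.0}
```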
• Balance data or exchange rates may be loaded to the Tax Reporting application using a
file. (Data and exchange rates cannot be loaded by way of the same file.) In addition,
balance data from the Oracle Financials Cloud may also be directly integrated to the Tax
Reporting application. At this time, exchange rate loading from the Oracle ERP Cloud is
not supported.
• Data is loaded to Tax Reporting at the summary account level. Line item detail is not
supported in Tax Reporting.
• Journals are not supported in Tax Reporting at this time. In Data Management, only
"data" load types are supported for Tax Reporting applications.
• Drill through from a Tax Reporting web form or Oracle Smart View for Office
(dynamically linked to Tax Reporting) to Data Management is supported.
• Drill through to a Tax Reporting web form from Data Management is only available when
Tax Reporting includes a URL that can be called from Data Management.
• Drill through functionality is not supported for exchange rates data.
• Data loaded from Data Management to Tax Reporting is summarized based on the
dimensionality in Data Management, and this summarized data is loaded to Tax
Reporting. Any computations or consolidation logic is only performed within Tax
Reporting.
• Tax Reporting supports "YTD" data only; consequently, no data is modified when
it has been loaded.
• When data load rules are executed, there are two export modes for loading data to the
target Tax Reporting application:
– Merge—By default, all data load is processed in the Merge mode. If data already
existed in the application, the system overwrites the existing data with the new data
from the load file. If data does not exist, the new data will be created.
– Replace—The system first clears any existing data in the application for the
combinations referenced in the data load file. Then the system performs the data
load in Merge mode.
Note:
In Replace mode, before the first record for a specific Scenario/Year/Period/
Entity/Mapped Data Source is encountered, the entire combination of data
for that Scenario, Year, Period, Entity, and Mapped Data Source is cleared,
whether entered manually or previously loaded. Note that when you have a
year of data in the Planning application, but are only loading a single
month, this option clears the entire year before performing the load.
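The difference between the two export modes can be sketched with a toy model. This is a deliberate simplification: cells are keyed by a Period/Entity/Account tuple (the real clear operates on the full Scenario/Year/Period/Entity/Mapped Data Source combination), and all values are hypothetical.

```python
# Existing target cells, keyed by (Period, Entity, Account).
existing = {
    ("Jan", "E100", "Cash"): 500.0,
    ("Jan", "E100", "Sales"): -200.0,
    ("Feb", "E100", "Cash"): 700.0,
}
# Incoming load file contains a single Jan/E100 cell.
incoming = {("Jan", "E100", "Cash"): 650.0}

def load(target, new_data, mode):
    result = dict(target)
    if mode == "Replace":
        # Clear every cell for each Period/Entity combination that
        # appears in the load file, then fall through to Merge.
        combos = {(p, e) for (p, e, _a) in new_data}
        result = {k: v for k, v in result.items() if (k[0], k[1]) not in combos}
    result.update(new_data)  # Merge: overwrite existing cells, create new ones
    return result

merged = load(existing, incoming, "Merge")      # Jan Sales survives
replaced = load(existing, incoming, "Replace")  # Jan Sales is cleared
print(len(merged), len(replaced))  # 3 2
```

Note how Replace removes the Jan Sales cell even though the load file never mentions it, which is exactly why loading one month in Replace mode can clear data you intended to keep.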
• If you need to consolidate all entities as part of the data load process, in Data
Management, use the Check Entity Group option (see Creating Check Entity Groups).
• The "ownership of data" feature in Tax Reporting is not supported in this release.
• Oracle Hyperion Financial Data Quality Management, Enterprise Edition can be used as
a primary gateway to integrate on-premises and Tax Reporting applications. This
feature enables customers to adapt cloud deployments into their existing EPM portfolio.
• The rundatarule command of the EPM Automate utility, which executes a Data
Management data load rule based on the start period and end period, can be executed
for a Tax Reporting application.
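As a hedged sketch of how such a command line is assembled: rundatarule takes the data load rule name, the start and end periods, the import mode, the export mode, and an optional file name. The rule name, periods, and modes below are hypothetical placeholders; confirm the exact parameters for your release in the EPM Automate documentation.

```python
# Illustrative only: build the rundatarule command line for a
# logged-in EPM Automate session. All values are placeholders.
args = [
    "epmautomate", "rundatarule",
    "TRBalanceRule",      # hypothetical data load rule name
    "Jan-24", "Mar-24",   # start period, end period
    "REPLACE",            # import mode
    "MERGE",              # export mode
]
command = " ".join(args)
print(command)
# You would run this string from a command prompt (or pass `args`
# to subprocess.run) after authenticating with `epmautomate login`.
```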
Navigating Data Management
• Data Management can be used to move data between service instances. This
means you can move data between Tax Reporting applications, or Tax Reporting
data to and from other Oracle Enterprise Performance Management Cloud
services.
• To load data to actual currency rather than entity currency when the currency is
fixed, set the currency in the Functional Currency field in the Location option.
See Defining Locations. You can also add a Currency row in the import format and
map it. See Defining the Import Format.
• After completing a data load cycle in Tax Reporting, data can be written out to a
text file created in a custom application for use in an external application, or to an
on-premises FDMEE location. When the custom application is defined, you can
export the file and download it using EPM Automate.
• For additional features available to Tax Reporting users, see the contents of this
guide.
Toolbars
The Standard toolbar is used for common Oracle Enterprise Performance
Management Cloud features. For additional information, see the Oracle Enterprise
Performance Management Workspace User’s Guide.
Help
When a selected Data Management option has context-sensitive help enabled for it,
click the Help button.
To view all other help topics specific to Data Management, see Administering Data
Management for Oracle Enterprise Performance Management Cloud.
For all other help, see the Oracle Cloud Help Center, which is the hub for accessing the latest
Oracle Enterprise Performance Management Cloud books, Help topics, and videos.
Workflow Tasks
From the Workflow tab, you can integrate data using the following options:
• Data Load
– Data Load Workbench
– Data Load Rule
– Data Load Mapping
• Other
– Batch Execution
– Report Execution
– System Maintenance Tasks
• Monitor—Process Details
Setup Tasks
From the Setup tab you can administer source and target systems, specify report and batch
definitions, and manage application settings.
Available tasks:
• Configure
– System Settings
– Application Settings
– Security Settings
– User Settings
• Register
– Source System
– Target Application
• Integration Setup
– Import Format
– Location
– Period Mapping
– Category Mapping
• Data Load Setup
– Logic Group
– Check Rule Group
– Check Entity Group
Button Description
Customize your view. Options include:
• Columns—You can choose "Show All" to
display all columns or choose individual
columns to display.
• Detach—Use to detach the column grid.
When you detach the grid, the columns
display in their own window. To return to the
default view, select View, and then click
Attach or click Close.
• Reorder Columns—Use to change the order
of the columns that are displayed. You can
select a column, and then use the buttons on
the right to change the column order.
Use to detach the column grid. When you detach
the grid, the columns are displayed in their own
window. To return to the default view, select View,
and then click Attach or click Close.
Refreshes the data. For example, if you submit a
rule, refresh to see if the status changes from
Running to Complete.
Note: Refresh does not display on the Data
Management setup screens.
Use to toggle the filter row. You can use the filter
row to enter text to filter the rows that are
displayed for a specific column.
You can enter text to filter on, if available, for a
specific column, and then press Enter. For
example, on the Process Details page, to view
only processes for a specific location, enter the
name of the location in the Location text box.
The Query by Example button displays on the
following Data Management setup screens: Target
Application, Import Format, Location, Data Load
Workbench, and Process Details.
To clear a filter, remove the text to filter by in the
text box, and then press Enter.
All filter text is case-sensitive.
Use to select an artifact on a page, such as a
target application, member, or general ledger
responsibility. When you click the Search button,
the Search and Select dialog box is displayed. In
some cases, available advanced search options
enable you to enter additional search conditions.
See Advanced Search Options.
By default, only the data rule assigned to the Category POV is displayed.
The Source System and Target Application are displayed as context information.
3. In Select Point of View, in Location, enter a full or partial string for the new location,
and then click OK.
4. Optional: To search on another location, from the Location drop-down, click More,
navigate to the location on the Search and Select: Location screen, and then click OK.
5. Optional: In Select Point of View, select Set as Default to use the new location as the
default location.
When a POV selection is set as a default, the user profile is updated with the default
selection.
6. Click OK.
Note:
By default, when you display the Data Load Rule screen, you see all data
load rules only for the current POV Category. To show all data load rules for
all categories regardless of the POV Category, from Data Rule Summary,
select Show and then All Categories.
Administration Tasks
Set system, application, and user profiles. You can also register source systems and
target applications.
Related Topics
• Predefining a List of Profiles
• Setting up Source Systems
• Registering Target Applications
4. Optional: To clear a setting, select the value, and then click Delete.
The value is removed, but it is deleted only when you save it.
5. Click Save.
Option Description
File Character Set Specify the method for mapping bit
combinations to characters for creating,
storing, and displaying text.
Each encoding has a name; for example,
UTF-8. Within an encoding, each character
maps to a specific bit combination; for
example, in UTF-8, uppercase A maps to
HEX41.
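The HEX41 mapping described above can be checked directly. This snippet is illustration only (Python here, not part of Data Management); it shows that, in the UTF-8 encoding, uppercase "A" is stored as the single byte 0x41:

```python
# In UTF-8, each character maps to a specific bit combination;
# uppercase "A" maps to the single byte 0x41 (HEX41).
encoded = "A".encode("utf-8")
print(encoded.hex().upper())  # 41
```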
Log Level Specify the level of detail displayed in the logs.
A log level of 1 shows the least detail. A log
level of 5 shows the most detail.
Logs are displayed in Process Details by
selecting the Log link.
Check Report Precision Specify the total number of decimal digits for
rounding numbers, where the most important
digit is the left-most nonzero digit, and the
least important digit is the right-most known
digit.
This setting is not available in Financial Consolidation and Close.
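The precision rule above (a total count of significant digits, counted from the left-most nonzero digit) can be sketched as follows. This is an illustrative helper, not Data Management code; the function name is hypothetical:

```python
import math

def round_to_precision(value, precision):
    """Round a number to a total count of significant decimal digits,
    counted from the left-most nonzero digit (illustrative sketch)."""
    if value == 0:
        return 0
    # Position of the most significant digit relative to the decimal point.
    digits = precision - int(math.floor(math.log10(abs(value)))) - 1
    return round(value, digits)

print(round_to_precision(123456.789, 4))  # 123500.0
```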
Display Data Export Option "Override All Data" Specify Yes to display the Override All Data
option in the Export Mode drop-down located
on the Execute Rule screen.
When you select to override all data, the
following message is displayed: "Warning:
Override All Data option will clear data for the
entire application. This is not limited to the
current Point of View. Do you really want to
perform this action?"
Enable Map Audit Set to Yes to create audit records for the Map
Monitor reports (Map Monitor for Location, and
Map Monitor for User). The default value for
this setting is No.
Access to Open Source Document When drilling down to the Data Management
landing page, this setting determines access
to the Open Source Document link (which
opens the entire file that was used to load
data).
• Administrator—Access to Open Source
Document link is restricted to the
administrator user.
• All Users—Access to the Open Source
Document link is available to all users. All
Users is the default setting.
Map Export Delimiter Sets the column delimiter value when
exporting member mappings.
Available delimiters are:
• ! (exclamation mark)
• , (comma)
• ; (semi-colon)
• | (pipe)
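A mapping export with a chosen delimiter can be sketched as below. The rows and column headings are hypothetical; this only illustrates how one of the supported delimiters shapes the output file:

```python
import csv
import io

# Hypothetical member mappings: (source value, target value).
mappings = [("1000", "Cash"), ("2000", "Payables")]

def export_mappings(rows, delimiter="|"):
    """Write mappings using one of the supported delimiters: ! , ; |"""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=delimiter, lineterminator="\n")
    writer.writerow(["Source", "Target"])
    writer.writerows(rows)
    return buf.getvalue()

print(export_mappings(mappings, delimiter=";"))
```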
Map Export Excel File Format Select the Excel file format to use when
exporting member mappings:
• Excel 97-2003 Workbook (*.xls)
• Excel Macro-Enabled Workbook (*.xlsm)
Application Root Folder The Application Root folder is the root folder
for storing all files used to load data to the
EPM application. You can use a separate root
folder for each EPM application.
Based on this parameter, the system saves log
files, generated files and reports to the
appropriate folder under this root directory.
Parameters must be set up on the server
separately from this setup step.
Selecting the Create Application Folder
button instructs the system to create a folder
structure in the path specified in this field. The
folder structure is (with sub-folders in each):
data
inbox
outbox
When you specify a folder at the application
level, and select the Create Application
Folder option, a set of folders is created for
the application that includes a scripts folder.
Create scripts specific to an application in this
folder. This is especially important for event
scripts that are different between applications.
If you do not set up an application level folder,
then you cannot have different event scripts by
application.
If you specify a Universal Naming Convention
(UNC) path, share permissions on the folder
must allow access to the DCOM user for read/
write operations. Use a Universal Naming
Convention (UNC) path for the application root
folder when Oracle Hyperion Financial
Management and Data Management are on
separate servers. Contact your server
administrator to define the required UNC
definition.
If a UNC path is not entered, then you must
enter the absolute path. For example, specify
C:\Win-Ovu31e2bfie\fdmee
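The folder structure described above can be sketched as follows. This is not the Create Application Folder implementation, just an illustration; the temporary root path and the placement of the scripts folder under data are assumptions for the sketch:

```python
import os
import tempfile

# Stand-in for the application root folder on the server.
root = os.path.join(tempfile.mkdtemp(), "fdmee_app")

# The three sub-folders created under the application root.
for sub in ("data", "inbox", "outbox"):
    os.makedirs(os.path.join(root, sub))

# Application-specific scripts folder (location assumed for illustration).
os.makedirs(os.path.join(root, "data", "scripts"))

print(sorted(os.listdir(root)))  # ['data', 'inbox', 'outbox']
```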
Option Description
File Character Set Specify the method for mapping bit
combinations to characters for creating,
storing, and displaying text.
Each encoding has a name; for example,
UTF-8. Within an encoding, each character
maps to a specific bit combination; for
example, in UTF-8, uppercase A maps to
HEX41.
Check Report Precision Specify the total number of decimal digits for
rounding numbers, where the most important
digit is the left-most non-zero digit, and the
least important digit is the right-most known
digit.
Display Data Export Option "Override All Data" Display the "Override All Data" option on the
Export Mode drop-down on the Execute Rule
screen.
When you select to override all data, the
following message is displayed: "Warning:
Override All Data option will clear data for the
entire application. This is not limited to the
current Point of View. Do you really want to
perform this action?"
Enable Map Audit Set to Yes to create audit records for the Map
Monitor reports (Map Monitor for Location, and
Map Monitor for User). The default value for
this setting is No.
Access to Open Source Document When drilling down to the Data Management
landing page, this setting determines access
to the Open Source Document link (which
opens the entire file that was used to load
data).
• Administrator—Access to Open Source
Document link is restricted to the
administrator user.
• All Users—Access to the Open Source
Document link is available to all users. All
Users is the default setting.
Map Export Delimiter Sets the column delimiter value when
exporting member mappings.
Available delimiters are:
• ! (exclamation mark)
• , (comma)
• ; (semi-colon)
• | (pipe)
Map Export Excel File Format Select the Excel file format to use when
exporting member mappings:
• Excel 97-2003 Workbook (*.xls)
• Excel Macro-Enabled Workbook (*.xlsm)
Note:
When Global mode is defined, user-level profiles for the POV are not
applicable.
Option Description
File Character Set Specify the method for mapping bit
combinations to characters for creating,
storing, and displaying text.
Each encoding has a name; for example,
UTF-8. Within an encoding, each character
maps to a specific bit combination; for
example, in UTF-8, uppercase A maps to
HEX41.
Default Check Report Specify the type of Check Report to use as the
default check report at the user level. The
following are pre-seeded check reports, but
you can create a new one and specify it here:
• Check Report—Displays the results of the
validation rules for the current location
(pass or fail status).
• Check Report Period Range (Cat, Start
per, End per)—Displays the results of the
validation rules for a category and
selected periods.
• Check Report by Val. Entity Seq.—
Displays the results of the validation rules
for the current location (pass or fail
status); sorted by the sequence defined in
the validation entity group.
• Check Report with Warnings—Displays
the results of the validation rules for the
current location. Warnings are recorded in
validation rules and shown if warning
criteria are met. This does not show rules
that passed the validation.
This setting is not available in Financial Consolidation and Close.
Log Level Specify the level of detail displayed in the logs.
A log level of 1 shows the least detail. A log
level of 5 shows the most detail.
Logs are displayed in Process Details by
selecting the Log link.
Map Export Delimiter Sets the column delimiter value when
exporting member mappings.
Available delimiters are:
• ! (exclamation mark)
• , (comma)
• ; (semi-colon)
• | (pipe)
Map Export Excel File Format Select the Excel file format to use when
exporting member mappings:
• Excel 97-2003 Workbook (*.xls)
• Excel Macro-Enabled Workbook (*.xlsm)
• Report security—Controls the reports that can be executed based on the report
groups assigned to a role.
• Batch security—Controls the batches that can be executed based on the batch
group assigned to a role.
• Location security—Controls access to locations.
Security levels apply to users. Role and Location security levels assigned to users are
compared at runtime. If a user is assigned a level that is equal to the level assigned to
the feature that the user is trying to access, the feature is available to the user.
In Data Management, administrators can control user access to locations using
location security.
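Following the rule above literally, the runtime comparison can be sketched as below. This is an illustration only; the function name and the numeric level values are assumptions, and real deployments may interpret levels hierarchically rather than by strict equality:

```python
def feature_available(user_level: int, feature_level: int) -> bool:
    """Per the rule stated above, a feature is available when the
    user's assigned level matches the level assigned to the feature.
    (Illustrative sketch; names and semantics are assumptions.)"""
    return user_level == feature_level

print(feature_available(user_level=3, feature_level=3))  # True
print(feature_available(user_level=3, feature_level=5))  # False
```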
For information on assigning role security to batch groups, see Defining Batch Security.
6. Click Save.
Role Description
Create Integration Creates Data Management metadata and data
load rules.
Run Integration Runs Data Management data rules and fills out
runtime parameters. Can view transaction logs.
5. Click Save.
When a user selects Report Execution, the list of available reports in the
Report Groups drop-down is based on the reports selected in role security.
4. In Batch Group, from Select, select the batch group to which to assign batch security.
5. Click Save.
When a user selects Batch Execution, the list of available batches in the Batch
Groups drop-down is based on the batches selected in role security.
Note:
If the web services and batch scripts are used, then location security is still
maintained and enforced.
Note:
Underscore is not supported in the prefix or suffix for group names.
6. In the Suffix field, select the name of the function or rule that the user is
provisioned to access.
Note:
Underscore is not supported in the prefix or suffix for group names.
Note:
For information on viewing Data Management processes or jobs, see Viewing
Process Details.
Note:
You must manually create a source system and initialize it before artifacts
(such as the import format or location) that use the source system are
imported using Migration import.
Tutorial Video
4. Click Save.
After you add a source system, you can select the source system in the table, and the
details are displayed in the lower pane.
When you run the initialize process, the system imports all the applications that
match the filter condition. If no filter is provided, all applications are imported.
e. In ODI Context Code, enter the context code.
The ODI context code refers to the context defined in Oracle Data Integrator. A
context groups the source and target connection information.
The default context code is GLOBAL.
4. Click Configure Source Connection.
The source connection configuration is used to store the Oracle ERP Cloud user
name and password. It also stores the WSDL connection.
5. In User Name, enter the Oracle ERP Cloud user name.
Enter the name of the Oracle ERP Cloud user who launches the process requests
to send information between the EPM Cloud and the General Ledger. This user
must be assigned a General Ledger job role such as "Financial Analyst," "General
Accountant," or "General Accounting Manager."
Note:
Web services requires that you use your native user name and password
and not your single sign-on user name and password.
For customers using release 19.01 and earlier of the Oracle ERP Cloud, use the
old WSDL to make the connection and then specify the URL in the following
format:
If you are using a release URL format earlier than R12, replace "fs" with "fin"
in the URL that you use to log on, and enter the result as the Web Services URL.
If you are using a release URL format later than R12, replace "fs" with "fa" in
the URL that you use to log on, or simply copy and paste the server from the
logon URL into the Web Services URL.
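The "fs" substitution above amounts to a simple string replacement on the logon host. The host name below is purely hypothetical, used only to show the shape of the change:

```python
# Hypothetical logon URL; only the "fs" host prefix matters here.
logon_url = "https://fs-mycompany.oraclecloud.com"

# Earlier than R12: "fs" becomes "fin". Later than R12: "fs" becomes "fa".
pre_r12 = logon_url.replace("fs-", "fin-", 1)
post_r12 = logon_url.replace("fs-", "fa-", 1)

print(pre_r12)   # https://fin-mycompany.oraclecloud.com
print(post_r12)  # https://fa-mycompany.oraclecloud.com
```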
8. Click Configure.
The confirmation "Source system [source system name] has been configured
successfully" is displayed.
9. On the Source System screen, click Initialize.
Initializing the source system fetches all metadata needed in Data Management, such as
ledgers, chart of accounts, and so on. It is also necessary to initialize the source system
when there are new additions, such as chart of accounts, segments/chartfields, ledgers,
and responsibilities in the source system.
The initialize process may take a while, and you can watch the progress in the job
console.
10. Click Save.
After you add a source system, select the source system in the table, and the details are
displayed in the lower pane.
Caution:
Use caution when deleting registered source systems! Part of the procedure for
deleting a source system is to delete the target application. When you delete the
target application, other artifacts are deleted. When you delete a registered source
system, the source system is removed from the Source System screen and all
import formats, locations, and data rules associated with the source system are
removed.
Tip:
To undo a deletion, click Cancel.
4. Click OK.
For example, Oracle Hyperion Financial Management customers can add Planning data,
or a Planning customer can add more Planning applications. In addition, this integration
enables you to write back from a cloud application to an on-premise application or other
external reporting applications.
• Cloud—This application type refers to a service instance that uses a remote service to
integrate data. A business process instance is a self-contained unit often containing the
web server and the database application. In this case, connection information must be
selected between the two business process instances.
This feature enables EPM customers to adapt cloud deployments into their existing EPM
portfolio, including:
– Planning Modules
– Planning
– Financial Consolidation and Close
– Profitability and Cost Management
– Tax Reporting
Also see Using Data Management with Multiple Oracle Cloud EPM Deployments.
• Data Source—Refers to generic source and target entities that use the specific data
model of the source or target applications.
For example, NSPB Sync SuiteApp Save Search Results objects and Oracle Human
Capital Management Cloud extracts are considered data source applications.
• Dimension—Refers to the class of dimension or dimension type of a target application
when loading metadata. When you add a dimension, Data Management creates six
dimension applications automatically: Account, Entity, Custom, Scenario, Version, and
Smartlist.
For more information on adding a dimension class or type as a target application, see
Registering a Target Application for the Class of Dimension or Dimension Type.
To register a target application:
1. Select the Setup tab, and then under Register, select Target Application.
2. In Target Application, in the summary grid, click Add, and then select the type of
deployment.
Available options are Cloud (for a Cloud deployment), Local (for an on-premise
deployment), or Data Source (for Oracle NetSuite or Oracle HCM Cloud deployments).
For a Cloud deployment, go to step 3.
For a Local deployment, go to step 4.
3. To register a Cloud deployment, select Cloud and then complete the following steps on
the EPM Cloud Credentials screen:
a. In URL, specify the service URL that you use to log on to your service.
b. In User name, specify the user name for the Cloud Service application.
c. In Password, specify the password for the Cloud Service application.
d. In Domain, specify the domain name associated with the Cloud Service Application.
An identity domain controls the accounts of users who need access to service
instances. It also controls the features that authorized users can access. A service
instance belongs to an identity domain.
Note:
Administrators can update the domain name that is presented to the
user, but Data Management requires the original domain name that
was provided when the customer signed up for the service. Alias
domain names cannot be used when setting up EPM Cloud
connections from Data Management.
4. Click OK.
5. In Application Details, enter the application name.
6. Click OK.
7. Click Refresh Members.
To refresh metadata and members from the EPM Cloud, you must click Refresh
Members.
8. Click Save.
9. Define the dimension details.
Optional: If not all dimensions are displayed, click Refresh Metadata.
10. Select the application options.
Note:
For Financial Consolidation and Close application options, see Defining
Application Options for Financial Consolidation and Close.
A target application with a prefix is not backward compatible and cannot be migrated to a
17.10 or earlier release. Only a target application without a prefix name can be migrated to an
earlier release.
For information on adding a prefix, see Registering Target Applications.
Note:
Do not include the Amount column in the data file. If it's included, you can delete
it after the application is created.
The name of the file becomes the name of the application, so name the file appropriately.
2. On the Setup tab, under Register, select Target Application.
3. In the Target Application summary grid, click Add.
4. Select Local target application.
5. From Select Application, select Data Export to File.
6. From the Select screen, select the name of the source file.
7. To register a target application with the same name as an existing target application, in
Prefix, specify a prefix to make the name unique.
The prefix name is joined to the existing target application name. For example, if you
want to name a demo target application the same name as the existing "Vision"
application, you might assign the Demo prefix to designate the target application with a
unique name. In this case, Data Management joins the names to form the name
DemoVision.
8. Click OK.
The system registers the application.
9. In Application Details, select the Dimension Details tab.
10. Edit the Dimension Name and Data Column Name as needed.
11. In Sequence, specify the order in which the maps are processed.
For example, when Account is set to 1, Product is set to 2, and Entity is set to 3,
then Data Management first processes the mapping for Account dimension,
followed by Product, and then by Entity.
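The sequence-driven processing order in the example above can be sketched as a simple sort. This is illustration only, not Data Management code:

```python
# Dimensions mapped to their Sequence values, as in the example above.
sequence = {"Account": 1, "Product": 2, "Entity": 3}

# Mappings are processed in ascending sequence order.
processing_order = sorted(sequence, key=sequence.get)
print(processing_order)  # ['Account', 'Product', 'Entity']
```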
12. In Column Order, specify the order of each column in the data export file.
By default, Data Management assigns the "Account" dimension as the first column
in the order.
13. Click Save.
14. Click the Application Options tab, and select any applicable properties and
values for the data export file.
For more information about data export to file properties, see Data Export to File
Properties.
16. From Setup, then Integration Setup, and then Import Format, create an import format
based on the source type that you want to load to the target application.
The import format defines the layout of source data.
For more information, see Defining Import Formats for File-Based Mappings.
17. From Setup, then Integration Setup, and then Location, define the location to specify
where to load data.
For more information, see Defining Locations.
18. From Setup, Integration Setup, and then Period Mapping, define any periods.
You define period mappings to map your source system data to Period dimension
members in the target application. You can define period mappings at the Global,
Application, and Source System levels.
For more information, see Defining Period Mappings.
For information on loading multiple periods for file-based data, see Loading Multiple
Periods for EPM Cloud or File-Based Source Systems.
19. From Setup, then Integration Setup, and then Category Mapping, define any
categories needed to map source system data.
For more information, see Defining Category Mappings.
20. From Workflow, then Data Load, and then Data Load Mapping, define data load
mapping to map source dimension members to their corresponding target application
dimension members.
You define the set of mappings for each combination of location, period, and category to
which you want to load data.
You can execute the data load rule for one or more periods. You then verify that
the data was imported and transformed correctly, and then export the data to the
target application.
See the following data load rule topics:
• Executing data load rules—Running Data Load Rules.
• Schedule data load rules—Scheduling Data Load Rules.
Filter Description
Download File Name Enter the name of the output file.
You can use EPM Automate to download
the output file. The EPM Automate Utility
enables Service Administrators to remotely
perform Oracle Enterprise Performance
Management Cloud tasks.
For more information, see Working with
EPM Automate for Oracle Enterprise
Performance Management Cloud.
Column Delimiter Select the character to use for delimiting
columns in the output file.
Available column delimiters are:
• ,
• |
• !
• ;
• :
The default delimiter is a comma (,).
Workflow Mode Select the data workflow method.
Available options:
• Full—Data is processed in the
TDATASEG_T table and then copied to
the TDATASEG table.
All four Workbench processes are
supported (Import, Validate, Export,
and Check) and data can be viewed in
the Workbench.
Drill-down is supported.
• Full No Archive—Data is processed in
the TDATASEG_T table and then copied
to TDATASEG table.
All four Workbench processes are
supported (Import, Validate, Export,
and Check). Data can be viewed in the
Workbench but only after the import
step has been completed. Data is
deleted from TDATASEG at the end of
the workflow process.
Drill-down is not supported.
• Simple—Data is processed in the
TDATASEG_T table and then exported
directly from the TDATASEG_T table.
All data loads include both the import
and export steps.
Data is not validated, and any
unmapped data results in load failure.
Maps are not archived in
TDATAMAPSEG.
Data cannot be viewed in the
Workbench.
Drill-down is not supported.
The Simple Workflow Mode is the
default mode.
File Character Set Specify the file character set.
The file character set determines the
method for mapping bit combinations to
characters for creating, storing, and
displaying text. Each encoding has a name;
for example, UTF-8.
UTF-8 is the default file character set.
End of Line Character Select the operating system of the server
associated with the End Of Line (EOL)
character.
Valid options are:
• Windows
• Linux
An EOL character indicates the end of a
line. Some text editors, such as Notepad,
do not display files that use the Linux EOL
correctly.
For EPM Cloud, Data Management uses the
Linux EOL character as the default. When
customers view exported files in Windows,
the EOL shows on a single line.
Include Header Determines whether to include/exclude the
header record in the output file.
Select Yes to include the dimension name
in the header record. The default is Yes.
Select No to exclude the header record.
Export Attribute Columns Include attribute columns if you have
static values to include in the export file.
You can also use attribute columns if you
do not need to map the source values.
This minimizes the need to define data
load mappings.
Select Yes to include attribute columns.
Select No to exclude attribute columns.
Accumulate Data Summarizes Account data before export
and groups the results by one or more
columns.
Select Yes to group the results by one or
more columns.
Select No to not group the results by one or
more columns.
The default value is Yes.
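The accumulation described above is a group-and-sum over the non-amount columns. A minimal sketch, with hypothetical rows of (account, entity, amount):

```python
from collections import defaultdict

# Hypothetical source rows: (account, entity, amount).
rows = [("4110", "East", 100.0), ("4110", "East", 50.0), ("4110", "West", 25.0)]

# Group by the non-amount columns and sum the amounts.
totals = defaultdict(float)
for account, entity, amount in rows:
    totals[(account, entity)] += amount

print(dict(totals))  # {('4110', 'East'): 150.0, ('4110', 'West'): 25.0}
```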
Sort Data Determines whether data is sorted based
on the column order.
Select Yes to sort the data by column order.
Select No to leave the data unsorted.
Pivot Dimension Pivoting changes the orientation of the
data in the export file enabling you to
aggregate the results and rotate rows into
columns. When you pivot between rows
and columns, the system moves the
selected dimension to the outermost row or
column on the opposite axis.
To use this feature, specify one dimension
name from the export file.
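The pivot described above rotates one dimension's members from rows into columns. A minimal sketch, assuming hypothetical rows of (account, period, amount) with Period as the pivoted dimension:

```python
# Hypothetical rows: (account, period, amount). Pivoting the Period
# dimension rotates its members from rows into columns.
rows = [("4110", "Jan", 100.0), ("4110", "Feb", 75.0), ("4120", "Jan", 30.0)]

pivoted = {}
for account, period, amount in rows:
    # Each account becomes a row; each period becomes a column.
    pivoted.setdefault(account, {})[period] = amount

print(pivoted)  # {'4110': {'Jan': 100.0, 'Feb': 75.0}, '4120': {'Jan': 30.0}}
```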
8. Select the Target Dimension Class or click to select the Target Dimension
Class for each dimension that is not defined in the application.
9. In Data Table Column Name, specify the table column name of the column in the
staging table (TDATASEG) where the dimension value is stored.
10. In Sequence, specify the order in which the maps are processed.
For example, when Account is set to 1, Product is set to 2, and Entity is set to 3,
then Data Management first processes the mapping for the Account dimension,
followed by Product, and then by Entity.
11. In Prefix Dimension for Duplicates, enable or check (set to Yes) to prefix
member names by the dimension name.
The member name that is loaded is in the format [Dimension Name]@[Dimension
Member]. The prefixed dimension name is applied to all dimensions in the
application when this option is enabled. You cannot select this option if there is a
dimension in the target that has duplicate members. That is, only select this option
when the duplicate members are across dimensions.
If the application supports duplicate members and Prefix Dimension for Duplicates
is disabled or unchecked (set to No), then the user must specify fully qualified
member names. Refer to the Essbase documentation for the fully qualified
member name format.
Note:
Planning does not support duplicate members.
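The [Dimension Name]@[Dimension Member] format described above can be built with a one-line helper. The function name and sample values are hypothetical:

```python
def prefixed_member(dimension, member):
    """Build a member name in the [Dimension Name]@[Dimension Member]
    format used when Prefix Dimension for Duplicates is enabled."""
    return "[{}]@[{}]".format(dimension, member)

print(prefixed_member("Entity", "100"))  # [Entity]@[100]
```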
13. In Enable export to file, select Yes to have Data Management create an output
data file for the custom target application.
A file is created in the outbox folder on the server with the following name format:
<LOCATION>_<SEQUENCE>.dat. For example, when the location is named
Texas and the next sequence is 16, then the file name is Texas_16.dat. The file is
created during the export step of the workflow process.
When the Enable export to file option is set to No, then the Export to Target
option is unavailable in the execution window.
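The <LOCATION>_<SEQUENCE>.dat naming convention above is a straightforward concatenation; a sketch using the example values:

```python
# Export file name format: <LOCATION>_<SEQUENCE>.dat
location, sequence = "Texas", 16
file_name = f"{location}_{sequence}.dat"
print(file_name)  # Texas_16.dat
```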
14. In File Character Set, select the file character set.
The file character set determines the method for mapping bit combinations to characters
for creating, storing, and displaying text. Each encoding has a name; for example, UTF-8.
Within an encoding, each character maps to a specific bit combination; for example, in
UTF-8, uppercase A maps to HEX41.
15. In Column Delimiter, select the character to use for delimiting columns in the output file.
Note:
The data table column name value must be a user-defined dimension greater
than the selected target dimension. For example, if the application has four
custom dimensions, select UD5.
7. Click OK.
The lookup dimension is added to the dimension detail list with the target
dimension class name of "LOOKUP." To use the lookup dimension as a source
dimension, make sure you map it in the import format.
3. Select the Target Dimension Class or click to select the Target Dimension
Class for each dimension that is not defined in the application.
The dimension class is a property that is defined by the dimension type. For
example, if you have a Period dimension, the dimension class is also "Period". For
Essbase applications, you must specify the appropriate dimension class for
Account, Scenario, and Period. For Oracle Hyperion Public Sector Planning and
Budgeting applications, you must specify the dimension class for Employee,
Position, Job Code, Budget Item, and Element.
For information on Financial Consolidation and Close application dimensions, see
Financial Consolidation and Close Supported Dimensions.
4. Optional: Click Refresh Metadata to synchronize the application metadata from
the target application.
5. In Data Table Column Name, specify the table column name of the column in the
staging table (TDATASEG) where the dimension value is stored.
Note:
The drill region simply defines the cells for which the drill icon is enabled in
data forms and Smart View. It is recommended to use a minimal set of
dimensions to define the drill region. If a large number of dimensions is
included in the drill region, then the drill region becomes large and
consumes system resources every time a form is rendered. For Planning
applications, use dimensions with a small number of members, such as Scenario,
Year, Period, and Version, to define the drill region. For a Financial Consolidation
and Close application, use only the Data Source dimension to define the drill region.
If you want to define a more granular drill region with multiple dimensions, then use the
Calculation Manager Drill Region page to edit the region definition. You can use member
functions like iDescendants to define the region instead of individual members. You can
access the drill region by selecting Navigate, and then Rules. Then click Database
Properties, expand the application, and select the cube. Right-click and select Drill
Through Definition. Edit only the region definition; do not modify the XML content.
If you edit the drill region manually, set the Drill Region option to No in Application
Options.
8. Click Save.
The target application is ready for use with Data Management.
Tip:
To edit the dimension details, select the target application, then edit the application
or dimension details, as necessary. To filter applications on the Target Application
page, ensure that the filter row is displaying above the column headers. (Click
to toggle the filter row.) Then, enter the text to filter.
Option Description
Load Method Select the method for loading data:
• Numeric—Loads numeric data only.
Planning data security is not enforced with
this method.
• All Data Types with Security—Loads
Numeric, Text, Smartlist, and Date data types.
If the Planning administrator loads data,
Planning data security is not enforced.
If a Planning non-administrator user loads
data, then Planning data security is
enforced.
Batch Size Specify the batch size used to write data to
file. The default size is 10,000.
Drill Region Select Yes to create a drill region. A drillable
region is created to use the drill-through
feature.
Note:
Data Management does not support drilling through to human resource data.
Enable Drill from Summary Select Yes to drill down from summary
members in a Planning data form or report and
view the detail source data that makes up the
number.
After enabling this option and loading the data
with the Create Drill Region option set to Yes,
the Drill icon is enabled at the summary level.
Drill is limited to 1000 descendant members
for a dimension.
Note:
If you enable Drill from Summary, do not include the dimension that you want to
drill from the parent members in the drill region definition. If you absolutely need
to include this dimension, then disable the automatic drill region creation and
maintain the drill region manually using the Calculation Manager user interface.
Use an Essbase member function like Descendants to enumerate the members
that you want to include in the drill region.
Summary drill is available for local service instances only. It is not available
between cross service instances or hybrid deployments.
Purge Data File When a file-based data load to Essbase is
successful, specify whether to delete the data
file from the application outbox directory.
Select Yes to delete the file, or No to retain the
file.
Date Format Use the date format based on the locale
settings for your locale. For example, in the
United States, enter the date using the
MM/DD/YY format.
Data Dimension for Auto-Increment Line Item Select the data dimension that matches the
data dimension you specified in Planning.
Used for loading incremental data using a
LINEITEM flag. See Loading Incremental Data
using the LINEITEM Flag to an EPM Cloud
Application.
Driver Dimension for Auto-Increment Line Item Select the driver dimension that matches the
driver dimension you specified in Planning.
Used for loading incremental data using a
LINEITEM flag. See Loading Incremental Data
using the LINEITEM Flag to an EPM Cloud
Application.
Member name may contain comma If the member name contains a comma, and
you are loading data to one of the following
services, set this option to Yes, and then load
the data:
• Planning Modules
• Planning
• Financial Consolidation and Close
• Tax Reporting
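To illustrate why this option matters, the following sketch (a hypothetical example, not Data Management internals) shows how a member name containing a comma must be quoted in a comma-delimited source file so the delimiter inside the name is not treated as a column break:

```python
# Hypothetical delimited source row: the member name
# "Salaries, Wages and Benefits" contains a comma, so it is quoted.
import csv, io

row = next(csv.reader(io.StringIO('"Salaries, Wages and Benefits",Jan,1000')))
print(row)  # ['Salaries, Wages and Benefits', 'Jan', '1000']
```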
Workflow Mode Select the data workflow method.
Available options:
• Full—Data is processed in the
TDATASEG_T table and then copied to
the TDATASEG table.
All four Workbench processes are
supported (Import, Validate, Export, and
Check), and data can be viewed in the
Workbench.
Drill-down is supported.
Full is the default Workflow mode.
• Full No Archive—Data is processed in the
TDATASEG_T table and then copied to
the TDATASEG table.
All four Workbench processes are
supported (Import, Validate, Export, and
Check). Data can be viewed in the
Workbench, but only after the import step
has been completed. Data is deleted from
TDATASEG at the end of the workflow
process.
Drill-down is not supported.
• Simple—Data is processed in the
TDATASEG_T table and then exported
directly from the TDATASEG_T table.
All data loads include both the import and
export steps.
Data is not validated, and any unmapped
data results in a load failure.
Maps are not archived in TDATAMAPSEG.
Data cannot be viewed in the Workbench.
Drill-down is not supported.
Enable Data Security for Admin Users Enables data validation when an administrative
user loads data. In this case, all data
validations in the data entry form are enforced
while loading data. Due to the enhanced
validations, data load performance is slower.
When Enable Data Security for Admin Users
is set to No (the default value), data loads by
the administrator are performed using the
Outline Load Utility (OLU). In this case,
performance is faster, but you cannot get a
detailed error report for any rows that are
ignored.
Note: When running any of the Workforce Incremental rules (for example, OWP_INCREMENTAL PROCESS DATA WITH SYNCHRONIZE DEFAULTS), ensure that the target option Enable Data Security for Admin Users is set to No. This option can only be set by an administrator.
Display Validation Failure Reasons Enables you to report rejected data cells and
the rejection reasons in a data validation
report when you load data.
Select Yes to report rejected data cells and the
rejection reason.
The limit for the number of rejections reported
is 100.
The data validation report is available for
download from the Process Details page by
clicking the Output link. In addition, a copy of
the error file is stored in the Outbox folder.
Select No to not report rejected data cells and
the rejection reason.
Drill View from Smart View Specify the custom view of columns from the
Workbench when displaying customized
attribute dimension member names in Oracle
Smart View for Office drill-through reports.
Custom views are created and defined in the
Workbench option in Data Integration. When
the custom view has been defined and then
specified in the Drill View from Smart View
field, in Smart View you can click the drill-
through cell and select Open as New Sheet,
and the drill-through report opens based on
the view defined in the Workbench.
If no views are defined on the Application
Options page, the default view is used,
meaning that attribute dimensions do not
display customized member names in Smart
View.
For more information, see Defining a Custom
View in the Workbench.
Default Import Mode Sets the default import mode when you
execute a data load rule in Data Management
or run an integration in Data Integration.
Available options:
• Append
• Replace
Default Export Mode Sets the default export mode when you
execute a data load rule in Data Management
or run an integration in Data Integration.
Available options:
• Accumulate (Add Data)
• Replace
• Merge Data (Store Data)
• Subtract
Option Description
Load Method Select the method for loading data:
• Numeric—Loads numeric data only.
Planning data security is not enforced
with this method.
• All data types with security—Loads
Numeric, Text, Smartlist, and Date data
types.
If the Planning administrator loads
data, Planning data security is not
enforced.
If a Planning non-administrator user
loads data, then Planning data security
is enforced.
Data is loaded in chunks of 500K cells.
Batch Size Specify the batch size used to write data to
file.
The default size is 10,000.
Drill Region Select Yes to create a drill region. A
drillable region is created to use the drill-
through feature.
Note: Data Management does not support drilling through to human resource data.
Enable Drill from Summary Select Yes to drill down from summary
members in a Planning data form or report
and view the detail source data that make
up the number.
After enabling this option and loading the
data with the Create Drill Region option set
to Yes, the Drill icon is enabled at the
summary level. Drill is limited to 1000
descendant members for a dimension.
Note: Summary-level drill down is not available for the Scenario, Year, and Period dimensions. For these dimensions, you must perform a drill through on the leaf members.
Summary drill is available for local service instances only. It is not available between cross-service instances or hybrid deployments.
Workflow Mode Select the data workflow method.
Available options:
• Full—Data is processed in the
TDATASEG_T table and then copied to
the TDATASEG table.
All four Workbench processes are
supported (Import, Validate, Export,
and Check), and data can be viewed in
the Workbench.
Drill-down is supported.
Full is the default Workflow mode.
• Full No Archive—Data is processed in
the TDATASEG_T table and then copied
to the TDATASEG table.
All four Workbench processes are
supported (Import, Validate, Export,
and Check). Data can be viewed in the
Workbench, but only after the import
step has been completed. Data is
deleted from TDATASEG at the end of
the workflow process.
Drill-down is not supported.
• Simple—Data is processed in the
TDATASEG_T table and then exported
directly from the TDATASEG_T table.
All data loads include both the import
and export steps.
Data is not validated, and any
unmapped data results in a load failure.
Maps are not archived in
TDATAMAPSEG.
Data cannot be viewed in the
Workbench.
Drill-down is not supported.
Enable Data Security for Admin Users Enables data validation when an
administrative user loads data. In this case,
all data validations in the data entry form
are enforced while loading data. Due to the
enhanced validations, data load
performance is slower.
When Enable Data Security for Admin
Users is set to No (the default value), data
loads by the administrator are performed
using the Outline Load Utility (OLU). In this
case, performance is faster, but you cannot
get a detailed error report for any rows that
are ignored.
Note: When running any of the Workforce Incremental rules (for example, OWP_INCREMENTAL PROCESS DATA WITH SYNCHRONIZE DEFAULTS), ensure that the target option Enable Data Security for Admin Users is set to No. This option can only be set by an administrator.
Display Validation Failure Reasons Enables you to report rejected data cells
and the rejection reasons in a data
validation report when you load data.
Select Yes to report rejected data cells and
the rejection reason.
The limit for the number of rejections
reported is 100.
The data validation report is available for
download from the Process Details page by
clicking the Output link. In addition, a copy
of the error file is stored in the Outbox
folder.
Select No to not report rejected data cells
and the rejection reason.
Drill View from Smart View Specify the custom view of columns from
the Workbench when displaying
customized attribute dimension member
names in Oracle Smart View for Office
drill-through reports.
Note: When drilling into Smart View, Data Integration uses the last used view on the Drill landing page. If no last used view is found, Data Integration uses the default view selection in this setting.
Default Export Mode Sets the default export mode when you
execute a data load rule in Data
Management or run an integration in Data
Integration.
Available options:
• Accumulate (Add Data)
• Replace
• Merge Data (Store Data)
• Subtract
5. Click Save.
Table 2-9 Financial Consolidation and Close Application Options and Descriptions
Option Description
Load Type Defaults to "Data" for loading numeric data only.
Journal Status The journal status indicates the current state of
the journal. The status of a journal changes when
you create, submit, approve, reject, or post the
journal.
Available options:
• Working—Journal is created. It has been
saved, but it may be incomplete. For example,
more line items may need to be added.
• Posted—Journal adjustments are posted to
the database.
Journal Type Select the type of journal to load.
Available options:
• Auto-Reversing—Loads an auto-reversing
journal that contains adjustments that need to
be reversed in the next period. That is, the
journal posts in the next period by reversing
the debit and credit.
• Regular—Load journals using the Replace
mode, which clears all data for a journal label
before loading the new journal data.
Journal Post As Select the method for posting journal entries:
Available options:
• Journal-to-date—A Journal-to-Date journal
entry carries forward from period to period,
from the first instance of the journal entry,
including a carry-forward across any
intervening year-ends. The only difference
between a Journal-to-Date entry and a
Year-to-Date entry is that in the first period of each
year, the data from Journal-to-Date entries in
the last period of the prior year are reversed.
For Year-to-Date entries, there are no
reversals in the first period of any year.
• Periodic—When you select the View member
FCCS_Periodic, when the journal entries are
posted, the data entered to the line detail is
summarized and posted to the Consol cube
based on the line detail POV. The data from
one posted journal entry does not overwrite
the data written from other posted journal
entries.
• Year-to-date—When you select the View
member FCCS_YTD_Input, you can enter a
year-to-date amount in the line detail debit /
credit fields. A Year-to-Date journal entry must
contain year-to-date entries on all detail lines.
When Year-to-Date journal entries are posted,
the appropriate periodic impact on the POV
across the entries is calculated and then
accumulated with any accumulation from
posted Periodic journal entries.
In the first period of any year, the year-to-date
View data is the same as Periodic data.
In subsequent periods, the periodic calculated
data posted to the Periodic View member for
each unique POV equals the current period
year-to-date entries accumulated across all
Year-to-Date journal entries, minus the prior
period year-to-date entries accumulated
across all Year-to-Date journal entries.
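The periodic calculation described above can be sketched as follows (a hypothetical illustration with made-up amounts, not Oracle code): for each POV, the periodic value equals the current period's accumulated year-to-date entries minus the prior period's.

```python
# Hypothetical sketch: derive periodic amounts from Year-to-Date entries.
# periodic[p] = ytd[p] - ytd[p-1]; in the first period of the year,
# the year-to-date value and the periodic value are the same.

def periodic_from_ytd(ytd_by_period):
    """ytd_by_period: accumulated YTD amounts for one POV, in period order."""
    periodic = []
    prior = 0.0  # nothing to subtract in the first period of the year
    for ytd in ytd_by_period:
        periodic.append(ytd - prior)
        prior = ytd
    return periodic

# Accumulated YTD entries for Jan, Feb, Mar:
print(periodic_from_ytd([100.0, 250.0, 250.0]))  # [100.0, 150.0, 0.0]
```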
Create Drill Region Select Yes to create a drill region.
Drillable region definitions are used to define the
data that is loaded from a general ledger source
system and to specify the data that is drillable in
Data Management.
In data grids and data forms, after the regions
have been loaded, cells that are drillable are
indicated by a light blue icon at the top left corner
of the cell. The cell context menu displays the
defined display name, which then opens the
specified URL.
A region definition load file consists of the
following information:
• Scenario, Year, Period, Entity, Account
• Display Name (for cell context menu) and
URL (to drill to)
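A region definition load file row might look like the following sketch (the column order follows the list above; the entity, account, display name, and URL values are illustrative assumptions, not a prescribed format):

```
Actual,FY21,Jan,Entity01,Sales,View Source Detail,https://example.com/drill
```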
Enable Zero Loading Select Yes to load 0 values during a multiple
period load.
Enable Data Security for Admin Users Enables data validation when an administrative
user loads data. In this case, all data validations in
the data entry form are enforced while loading
data. Due to the enhanced validations, data load
performance is slower.
When Enable Data Security for Admin Users
is set to No (the default value), data loads by the
administrator are performed using the Outline
Load Utility (OLU). In this case, performance is
faster, but you cannot get a detailed error report
for any rows that are ignored.
Note: When running any of the Workforce Incremental rules (for example, OWP_INCREMENTAL PROCESS DATA WITH SYNCHRONIZE DEFAULTS), ensure that the target option Enable Data Security for Admin Users is set to No. This option can only be set by an administrator.
Enable Drill from Summary Select Yes to drill down from summary members
in a Planning data form or report and view the
detail source data that make up the number.
After enabling this option and loading the data with
the Create Drill Region option set to Yes, the Drill
icon is enabled at the summary level. Drill is
limited to 1000 descendant members for a
dimension.
Note: Summary-level drill down is not available for the Scenario, Year, and Period dimensions. For these dimensions, you must perform a drill through on the leaf members.
Summary drill is available for local service instances only. It is not available between cross-service instances or hybrid deployments.
Multi-GAAP Specify the multi-GAAP dimension used to report
your financial statements in both local GAAP and
in IFRS or another GAAP.
This dimension tracks the local GAAP data input
as well as any GAAP adjustments.
Data Source Specify the data source dimension.
The default value is "FCCS_Managed Source."
Purge Data File When a file-based data load to Essbase is
successful, specify whether to delete the data file
from the application outbox directory. Select Yes to
delete the file, or No to retain the file.
Member name may contain comma If the member name contains a comma, and you
are loading data to one of the following services,
set this option to Yes, and then load the data:
• Planning Modules
• Planning
• Financial Consolidation and Close
• Tax Reporting
Workflow Select the data workflow method.
Available options:
• Full—Data is processed in the TDATASEG_T
table and then copied to the TDATASEG table.
All four Workbench processes are supported
(Import, Validate, Export, and Check), and
data can be viewed in the Workbench.
Drill-down is supported.
Full is the default Workflow mode.
• Full No Archive—Data is processed in the
TDATASEG_T table and then copied to the
TDATASEG table.
All four Workbench processes are supported
(Import, Validate, Export, and Check). Data
can be viewed in the Workbench, but only
after the import step has been completed.
Data is deleted from TDATASEG at the end of
the workflow process.
Drill-down is not supported.
• Simple—Data is processed in the
TDATASEG_T table and then exported
directly from the TDATASEG_T table.
All data loads include both the import and
export steps.
Data is not validated, and any unmapped data
results in a load failure.
Maps are not archived in TDATAMAPSEG.
Data cannot be viewed in the Workbench.
Drill-down is not supported.
Drill View from Smart View Specify the custom view of columns from the
Workbench when displaying customized attribute
dimension member names in Oracle Smart View
for Office drill-through reports.
Note: When drilling into Smart View, Data Integration uses the last used view on the Drill landing page. If no last used view is found, Data Integration uses the default view selection in this setting.
Note:
In addition to the system predefined dimensions, you can create up to four
additional Custom dimensions based on your application needs. Custom
dimensions are associated with the Account dimension and provide
additional detail for accounts. If the application is enabled with the Multi-
GAAP reporting option, you can create three Custom dimensions.
Dimension Member
Year All members in the Year dimension
The Year member is prefixed with "FY."
For example, the Year member 2016 shows
as "FY16."
Period Only base members
View The View dimension controls the
representation of data across time periods.
Valid views are "YTD" or "PTD."
For periodic data, the member
"FCCS_Periodic" is used, and for YTD,
"FCCS_YTD_Input" is used.
Currency Shows the members available under the
reporting currency parent using the parent
member "Input Currency."
Consolidation The Consolidation dimension enables the
users to report on the details used to
perform the various stages of the
consolidation process. Members are
associated with "FCCS_Entity Input."
Scenario Scenario contains the following members:
• Actual
• Budget (optional)
• Forecast
Entity All base members
Intercompany "FCCS_No Intercompany" or ICP_<ENTITY>.
If there is no intercompany value,
"FCCS_No Intercompany" is used.
Otherwise for an intercompany member,
the ICP_<ENTITY> format is used.
Account All base members
Movement No Movement
All base members under
FCCS_Mvmts_Subtotal including:
• FCCS_No Movement
• FCCS_Movements
• FCCS_OpeningBalance
• FCCS_ClosingBalance
Data Source FCCS_Managed Data
Multi GAAP All base members including:
• IFRS (system)
• Local GAAP (system)
• IFRS Adjustments (system)
The default is "FCCS_Local GAAP."
Custom1 All base members that are a valid
intersection for the account.
This dimension is based on the domain of
the custom dimension. If no member, use
"No_<Dimension Name>."
Custom2 All base members that are a valid
intersection for the account.
This dimension is based on the domain of
the custom dimension. If no member, use
"No_<Dimension Name>."
Custom3 All base members that are a valid
intersection for the account.
This dimension is based on the domain of
the custom dimension. If no member, use
"No_<Dimension Name>."
Custom4 All base members that are a valid
intersection for the account.
This dimension is based on the domain of
the custom dimension. If no member, use
"No_<Dimension Name>."
• Entity
• From Currency
• To Currency
• Rates Essbase cube name
For example, a sample Financial Consolidation and Close application file may have the
following values:
Other dimensions are mapped from the selected POV or set in the target application options. It
is recommended that you map the target member for "View" to "FCCS_Periodic" and "Entity"
to "FCCS_Global Assumptions."
To load exchange rates to Oracle Hyperion Financial Management:
1. On the Setup tab, under Register, select Target Application.
2. In the Target Application summary grid, select a Financial Management target
application.
3. After defining the application details in Application Detail, select the Application
Options tab.
4. Specify the members for the following dimensions:
• Movement
• Multi GAAP
• Data Source
5. From Movement, select the member value for the movement dimension.
Available options:
• FCCS_Movements
• FCCS_CashChange
6. In Multi GAAP, select the member value for the multi-GAAP.
7. In Data Source, select the member value for the data source.
8. Click Save.
Option Description
Load Type Defaults to "Data" for loading numeric data
only.
Create Drill Region Select Yes to create a drill region.
A drill region enables you to navigate from
data in the Tax Reporting application to its
corresponding source data. Data
Management loads enabled drill regions to
the Tax Reporting target application after
data is loaded and consolidated. A cell is
considered drillable in the target
application when it is associated with the
drill regions defined in the application.
Data Management creates drill regions by
scenario. For any cube, the name of the
drill region is FDMEE_<name of the
scenario member>.
Data Management also checks whether a
dimension is enabled for drill.
Members of enabled dimensions selected
in data loads are included in the drill
region filter. If no dimensions are enabled,
the following dimensions are enabled by
default: Scenario, Version, Year, and
Period. You can enable additional
dimensions, and the subsequent data load
considers members of newly enabled
dimensions. If you disable any dimensions
which were previously included in a drill
region used for drill creation, members of
such dimensions are not deleted during the
subsequent data loads. If needed, you can
remove obsolete members manually.
To disable the drill region, select No.
Enable Drill from Summary Select Yes to drill down from summary
members in a Planning data form or report
and view the detail source data that make
up the number.
After enabling this option and loading the
data with the Create Drill Region option set
to Yes, the Drill icon is enabled at the
summary level. Drill is limited to 1000
descendant members for a dimension.
Note: Summary-level drill down is not available for the Scenario, Year, and Period dimensions. For these dimensions, you must perform a drill through on the leaf members.
Summary drill is available for local service instances only. It is not available between cross-service instances or hybrid deployments.
Movement Specify the movement dimension member
that indicates the automated cash flow
reporting dimension used through
hierarchies and system calculations. The
dimension member can be any valid base
member.
By default, the system provides members
in the Movement dimension to maintain
various types of cash flow data, and FX to
CTA calculations.
If no movement, then specify the member
as "FCCS_No Movement." Otherwise, select
the desired movement member.
• TRCS_BookClosing
• TRCS_TBClosing
• FCCS_No Movement
• FCCS_ClosingBalance
• TRCS_TARFMovements
• TRCS_ETRTotal
• TRCS_ClosingBVT
Multi-GAAP Specify the multi-GAAP dimension used to
report your financial statements in both
local GAAP and in IFRS or another GAAP.
This dimension tracks the local GAAP data
input as well as any GAAP adjustments.
Datasource Specify the data source dimension.
The default value is "FCCS_Managed
Source."
Purge Data File When a file-based data load to Essbase is
successful, specify whether to delete the
data file from the application outbox
directory. Select Yes to delete the file, or No
to retain the file.
Enable Data Security for Admin Users Enables data validation when an
administrative user loads data. In this case,
all data validations in the data entry form
are enforced while loading data. Due to the
enhanced validations, the performance of
data load is slower.
Note: When running any of the Workforce Incremental rules (for example, OWP_INCREMENTAL PROCESS DATA WITH SYNCHRONIZE DEFAULTS), ensure that the target option Enable Data Security for Admin Users is set to No. This option can only be set by an administrator.
Jurisdiction Specify the jurisdiction dimension.
Any valid base member. The default
member is "TRCS_No Jurisdiction."
Member name may contain comma If the member name contains a comma,
and you are loading data to one of the
following services, set this option to Yes,
and then load the data:
• Planning Modules
• Planning
• Financial Consolidation and Close
• Tax Reporting
Workflow Mode Select the data workflow method.
Available options:
• Full—Data is processed in the
TDATASEG_T table and then copied to
the TDATASEG table.
All four Workbench processes are
supported (Import, Validate, Export,
and Check), and data can be viewed in
the Workbench.
Drill-down is supported.
Full is the default Workflow mode.
• Full No Archive—Data is processed in
the TDATASEG_T table and then copied
to the TDATASEG table.
All four Workbench processes are
supported (Import, Validate, Export,
and Check). Data can be viewed in the
Workbench, but only after the import
step has been completed. Data is
deleted from TDATASEG at the end of
the workflow process.
Drill-down is not supported.
• Simple—Data is processed in the
TDATASEG_T table and then exported
directly from the TDATASEG_T table.
All data loads include both the import
and export steps.
Data is not validated, and any
unmapped data results in a load failure.
Maps are not archived in
TDATAMAPSEG.
Data cannot be viewed in the
Workbench.
Drill-down is not supported.
Drill View from Smart View Specify the custom view of columns from
the Workbench when displaying
customized attribute dimension member
names in Oracle Smart View for Office
drill-through reports.
Note: When drilling into Smart View, Data Integration uses the last used view on the Drill landing page. If no last used view is found, Data Integration uses the default view selection in this setting.
Default Export Mode Sets the default export mode when you
execute a data load rule in Data
Management or run an integration in Data
Integration.
Available options:
• Accumulate (Add Data)
• Replace
• Merge Data (Store Data)
• Subtract
Note:
Drill through functionality is not supported for exchange rates data.
Note:
After a target application is deleted and the process has run successfully, use
the Target Application screen to set up the same application and redefine the
rules.
4. Click Save.
4. In Rule name, specify the name of the business rule that has been defined for the
script.
5. From Script Scope, select which type of business rule script is processed and
executed first. The scope can be at the application or data rule level.
Available scopes:
• Application
• Data Rule
Consider the following when selecting the script scope:
• If the scope of one or more rules is Application, then they are all run in
sequential order.
• If the scope is Data Rule, only rules for the running data rule run in sequential
order.
• Application scope rules do not run if a Data Rule scope exists for the
data load rule. However, if you run another rule against the same target
application, then the Application scope rule runs.
If the script scope is a data rule, select the specific data load rule from the Data
Load Rule drop-down.
6. From Data Load Rule, select the specific data load rule in which you want to run
the script.
The Data Load Rule is disabled when the script scope is "Application."
7. In Sequence, specify the numeric order in which to execute a script if multiple
scripts are to be executed and if scripts are at one script scope level (such as only
data rules, or only applications).
8. Click Save.
the file import. The remaining data import processes stay the same as in a standard data load
for a file.
Considerations:
• The source data file must be a delimited data file.
• Data files used must contain a one-line header, which describes the delimited columns.
• Both numeric and non-numeric data can be loaded.
• Any deleted records between the two files are ignored. In this case, you must handle the
deleted records manually.
• If the file is missing (or you change the last ID to a non-existent run), the load completes
with an error.
• Sort options determine the level of performance when using this feature. Sorting increases
the processing time; pre-sorting the file makes the process faster.
• Only single period data loads are supported for an incremental load. Multi-period loads
are not supported.
• Drill down is not supported for incremental loads since incremental files are loaded in
Replace mode and only the last version of the file comparison is present in the staging
table.
As a workaround, you can load the same data file to another location using the full data
load method. In this case, you should import data only and not export it to the target
application.
• Copies of the source data file are archived for future comparison. Only the last 5 versions
are retained. Files are retained for a maximum of 60 days. If no incremental load is
performed for more than 60 days, then set the Last Process ID to 0 and perform the load.
Watch this tutorial video to learn more about loading and calculating incremental workforce
data:
Use a prefix when the source system name you want to add is based on an
existing source system name. The prefix is joined to the existing name. For
example, if you want to name an incremental file source system the same name
as the existing one, you might assign your initials as the prefix.
7. Click OK.
8. Click Save.
The system creates the dimension details automatically.
For example, you might select Delimited - Numeric Data as the format of the file.
14. From the File Delimiter drop-down, select a type of delimiter.
Available delimiters:
• comma (,)
• exclamation (!)
• semicolon (;)
• colon (:)
• pipe (|)
• tab
• tilde (~)
15. In Target, select the name of the target application.
Note:
Only single period loads are supported.
21. From the POV bar, select the POV of the location for the data load rule.
Default period mappings default to the list of source application periods using the
application or global period mappings based on the period key. The list of source
periods is added as Year and Period filters.
The Explicit method for loading data is used when the granularity of the source
periods and target application periods are not the same.
26. Optional: In Import Format, select the import format to use with the file to
override the import format. If the import format is unspecified, the import
format from the location is used.
27. If the target system is a Planning application, from the Target Plan Type drop-
down, select the plan type of the target application.
28. Select the Source Filters tab.
29. In Source File, select the name of the data file that contains the data you are loading. It may be the same file from which you created the data source application, or another file that has data as well as the appropriate header.
The file may have the same name as the original file, or it may have a new name. The difference file (that is, the incremental load file) is created automatically by comparing the two loaded files. So if file A.txt has 100 rows and file B.txt has 300 rows where the first 100 are identical, select file A.txt for your first load when the Last Process ID is 0. For the second load, select file B.txt; the ID automatically points to the load ID that was assigned to A.txt.
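The A.txt/B.txt comparison above can be sketched as follows. This is a minimal illustration of the positional "do not sort" comparison described in the next step, not the product's implementation:

```python
def incremental_records(baseline_rows, new_rows):
    """Return rows in new_rows that are new or changed versus the baseline.

    Rows are assumed to arrive in the same order each run, so rows are
    compared positionally; any row past the end of the baseline, or any
    row that differs at the same position, is extracted.
    """
    changed = []
    for i, row in enumerate(new_rows):
        if i >= len(baseline_rows) or baseline_rows[i] != row:
            changed.append(row)
    return changed

a = [f"row{i}" for i in range(100)]            # A.txt: 100 rows (first load)
b = a + [f"row{i}" for i in range(100, 300)]   # B.txt: same 100 plus 200 new
print(len(incremental_records(a, b)))          # only the 200 new rows remain
```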
30. In Incremental Processing Options, select the method for sorting data in the source
file.
Available options:
• Do not sort source file—Source file is compared as provided. This option assumes
that the source file is generated in the same order each time. In this case, the system
performs a file comparison, and then extracts the new and changed records. This
option makes the incremental file load perform faster.
• Sort source file—Source file is sorted before the file comparison for changes is performed. The sorted file is then compared to the prior sorted version of the file. Sorting a large file consumes significant system resources and performs more slowly.
Note:
If you have a rule that uses the Do Not Sort option and then switch to a Sort option, the first load has invalid results because the files are in a different order. Subsequent runs load data correctly.
31. The Last Process ID shows the last run ID for the original source data file.
When the load is first run for the original data file, the Last Process ID shows the value
of 0.
When the load is run again, the Last Process ID shows the run number of the last load.
If the newly created file comparison version and the original data file show no differences, or the file is not found, the value of the Last Process ID is set to the last load ID that ran successfully.
To reload all data, set the Last Process ID back to 0, and select a new source file to
reset the baseline.
32. View the data before exporting it.
You can load these Free Form applications in Data Management and Data Integration,
but there are a number of considerations:
1. The Free Form application requires a minimum of three dimensions: Account,
Period, and Scenario. The application definition in Data Management and Data
Integration must have three dimensions with the dimension type of Account,
Period, and Scenario.
2. You must set up a period mapping so that the system knows where to load the
data. For example, you could set up period mapping with a period of Jan-20 which
is the period member created in a Free Form application. When you set up a
period mapping in Data Management and Data Integration, you enter a period
created in the Free Form application and a year entry so that it passes the user
interface validations for the period mapping. This is the case where you don't have to define a Year dimension in your Free Form application, only a Period.
3. You must specify a Scenario dimension, but in Planning, it can be any dimension. The only requirement is that the dimension is classified as Scenario in Data Management. You then need to set up a category mapping so that the process succeeds.
• Assign a dimension classification of Account for one of the dimensions.
• If you want to use the drill through functionality, then a "Scenario" dimension is
required. Assign a dimension classification of Scenario for one of the dimensions.
Note that when a dimension is classified as Scenario, the category mapping is used to assign a target value, so data can be loaded to only one value. Select a dimension that meets this requirement and define a category mapping.
• If you want to use the Check feature, then an "Entity" dimension is required. Assign a dimension classification of Entity for one of the dimensions.
Applications of type ASO are not auto registered when they are created. Use the Data
Management Target Application page and select Application type of Essbase to
manually register the application.
3
Integrating Data
Related Topics
• Integrating Data Using a File
• Integrating Metadata
• Integrating Oracle ERP Cloud Oracle General Ledger Applications
• Integrating Budgetary Control
• Integrating Oracle NetSuite
• Integrating with the Oracle HCM Cloud
• Loading Data from the Oracle ERP Cloud
• Integrating Account Reconciliation Data
• Integrating EPM Planning Projects and Oracle Fusion Cloud Project Management
(Project Management)
3-1
Chapter 3
Integrating Data Using a File
• Description—The description that you entered when you registered the source system.
• Drill URL—The drill URL you entered when you registered the source system.
Note:
You must manually create a source system and initialize it before artifacts (such as the import format or location) that use the source system are imported using Migration import.
Tutorial Video
4. Click Save.
After you add a source system, you can select the source system in the table, and
the details are displayed in the lower pane.
Note:
Administrators can update the domain name that is presented to the user,
but Data Management requires the original domain name that was provided
when the customer signed up for the service. Alias domain names cannot
be used when setting up EPM Cloud connections from Data Management.
a unique name. In this case, Data Management joins the names to form the name DemoVision.
h. Click OK.
4. Click OK.
5. In Application Details, enter the application name.
6. Click OK.
7. Click Refresh Members.
To refresh metadata and members from the EPM Cloud, you must click Refresh Members.
8. Click Save.
9. Define the dimension details.
Optional: If not all dimensions are displayed, click Refresh Metadata.
10. Select the application options.
Note:
For Financial Consolidation and Close application options, see Defining
Application Options for Financial Consolidation and Close.
• Skip
• Currency
• Attribute
• Description
• Dimension Row
Note:
If you integrate a Financial Consolidation and Close or Tax Reporting source with an explicit period mapping type, the system stores the mapping period (SRCPERIOD) in the ATTR2 column and the mapping year (SRCYEAR) in the ATTR3 column. For this reason, when importing data from Financial Consolidation and Close, attribute columns ATTR2 and ATTR3 should not be used for any other dimension mappings.
Similarly, when you map a Movement source attribute to any target dimension, the system automatically creates another map for mapping the Movement to the ATTR1 column.
Note:
You may encounter
issues with loading
data if the currency
is not specified
correctly.
To define an import format for numeric data files with a fixed length:
Note:
For information about defining import formats for fixed length all data type data files,
see Setting the Import Format Data Types.
3. In the Import Format Detail grid, select the type of row to add from the Add drop-
down.
Available options:
• Skip Row
• Currency Row
• Attribute Row
• Description Row
• Dimension Row
4. In Start, specify where on the file the column starts.
5. In Length, enter the length of the column.
6. In Expression, enter the expression that overwrites the contents of the column.
When entering a constant, enter a starting position and length. Use a start position
of "1" and a length of "1."
See Adding Import Expressions.
7. Click Save.
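The Start and Length settings in steps 4 and 5 describe how fields are cut out of each fixed-length record. A minimal sketch of that slicing, with a hypothetical column layout (the dimension names and positions below are assumptions, not from the guide):

```python
def parse_fixed_width(line, columns):
    """Extract fields from one fixed-length record.

    `columns` maps a dimension name to its (start, length) pair, mirroring
    the Start and Length fields on the Import Format Detail grid. Start
    positions are 1-based, as they are on the screen.
    """
    out = {}
    for name, (start, length) in columns.items():
        out[name] = line[start - 1:start - 1 + length].strip()
    return out

# Hypothetical layout: Account in positions 1-6, Entity in 7-10, Amount in 11-18.
layout = {"Account": (1, 6), "Entity": (7, 4), "Amount": (11, 8)}
record = "401000E100  123.45"
print(parse_fixed_width(record, layout))
```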
To define an import format for delimited numeric data files:
Note:
For information about defining import formats for delimited all data type data
files, see Setting the Import Format Data Types.
Import Formats, you select the source period rows of Year and Period, so that they are
identified as columns in the file, and then map them to the appropriate dimension in
the target system. Then you run the data load rule and select a range of dates to load.
The range of dates can be based on a default or explicit period mapping type.
For example, in the following sample file, there is multiple period data, "Jan" and "Feb"
in a single data file.
E1,100,2016,Jan,USD,100
E2,100,2016,Jan,USD,200
E3,100,2016,Feb,USD,300
E4,100,2016,Feb,USD,400
In another example, if you select a Jan-March period range, and the file includes: Jan,
Feb, Mar, and Apr, then Data Management only loads Jan, Feb, and Mar.
E1,100,2016,Jan,USD,100
E2,100,2016,Jan,USD,200
E3,100,2016,Feb,USD,300
E4,100,2016,Feb,USD,400
E4,100,2016,Mar,USD,400
E4,100,2016,Mar,USD,400
E4,100,2016,Apr,USD,400
E4,100,2016,Apr,USD,400
Data Management loads the periods specified on the Execute Rule screen, and
ignores rows in the file that don't match what you selected to load.
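The period filtering described above (Jan, Feb, Mar loaded; Apr ignored) amounts to keeping only rows whose period column falls in the selected range. An illustrative sketch, using the six-field row layout from the sample file:

```python
def filter_periods(rows, periods):
    """Keep only rows whose period falls in the selected range.

    Row layout (entity, account, year, period, currency, amount) follows
    the sample file above; the period field is at index 3.
    """
    wanted = set(periods)
    return [r for r in rows if r[3] in wanted]

rows = [
    ["E1", "100", "2016", "Jan", "USD", "100"],
    ["E3", "100", "2016", "Feb", "USD", "300"],
    ["E4", "100", "2016", "Apr", "USD", "400"],
]
# A Jan-Mar rule execution ignores the Apr row, as described above.
print(filter_periods(rows, ["Jan", "Feb", "Mar"]))
```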
Data Management currently supports data loads that have up to six plan types. Planning can support three custom plan types and up to four Planning Modules applications (Workforce, Capex, Project, Financials). You can enable any combination of these applications. When you create a Planning Modules application and you create more than two custom plan types, you cannot support a data load to all four applications.
If the target system is Financial Consolidation and Close, from the Target Cube drop-
down, select the data load cube type.
Available options:
• Consol
• Rates
5. Optional: In Import Format, if the file type is a multiple period text file (with contiguous
periods, or noncontiguous periods), select the import format to use with the file so you
can override the import format. For example, specify an import format for single and
multiple period data rules, which enables you to load single or multiple period files from
the same location. In this case, the import format selected must have the same target as
the location selected in the POV. If the import format is unspecified, then the import
format from the location is used.
The starting and ending periods selected for the rule determine the specific periods in the
file when loading a multiple period text file.
In the file, when amounts are unavailable for contiguous periods, then you can explicitly
map the respective amount columns to required periods in the data rule in Data Load
Mapping. When you execute the rule, the data is loaded to the periods as specified by
the explicit mapping.
6. Optional: Enter a description.
7. If necessary, select the Source Options and add or change any dimensional data.
8. Click Save.
Note:
In Financial Consolidation and Close for YTD data loads, data is stored in Periodic view. In this case, the user must select this option so that "pre-processing" is done to convert the YTD data from the file to periodic data for loading purposes.
When you run a data load rule, you have several options:
Note:
When a data load rule is run for multiple periods, the export step occurs only
once for all periods.
• Import from Source—Data Management imports the data from the source
system, performs the necessary transformations, and exports the data to the Data
Management staging table.
Select this option only when:
– You are running a data load rule for the first time.
– Your data in the source system changed. For example, if you reviewed the
data in the staging table after the export, and it was necessary to modify data
in the source system.
In many cases, source system data may not change after you import the data from
the source the first time. In this case, it is not necessary to keep importing the data
if it has not changed.
When the source system data has changed, you need to recalculate the data.
Note:
Oracle E-Business Suite and source imports require a full refresh of data
load rules. The refresh only needs to be done once per chart of
accounts.
Note:
Select both options only when the data has changed in the source system
and to export the data directly to the target application.
You can drill through at the leaf level or at a summary level. When you drill down from summary, you can view summary members in the Planning data form or reports and view the detail source data that makes up the number. To use this feature, select the Enable Drill from
Summary option on the Application Options tab. After enabling this option and loading the
data with the Create Drill Region option set to "Yes," the Drill icon is enabled at the summary
level. Drill is limited to 1000 descendant members for a dimension. When you perform a drill
down from summary, source data and target data are shown on separate tabs.
When drilling through from the EPM Cloud application, a landing page is displayed in a
separate workspace tab that shows all rows that comprise the amount from the selected cell
in the EPM Cloud application. From this landing page, you can open the source document or
continue to drill through to the defined source system landing page.
Drill through based on a URL requires that you are connected to the server on which the data
resides. Drill through works only for data loaded through Data Management. In addition,
because drill through is available in the target application, data load mappings must have at
least one explicit mapping for the drill through to work.
Note:
Drill through functionality is not supported for exchange rates data loaded to:
• Financial Consolidation and Close
• Tax Reporting
• Planning Modules
• Planning
Version, Year, and Period. You can enable additional dimensions, and the subsequent
data load considers members of newly enabled dimensions. If you disable any
dimensions which were previously included in a drill region used for drill creation,
members of such dimensions are not deleted during the subsequent data loads. If
needed, you can remove obsolete members manually.
To add a drill region for the Data Management target application:
1. On the Setup tab, under Register, select Target Application.
2. In the Target Application summary grid, select the target application.
3. Select the Application Options tab.
4. In Drill Region, enter: Yes.
Note:
Administrators can set the drill region setting at the application level in
the Target Application option. Additionally, they can change the setting
for a specific target application in data load rules.
5. Click Save.
6. In the Mappings grid of the import format, map the columns in the source column
to the dimensions in the target application to which to drill through.
7. Click OK and then click Save.
In the next example, you can drill through to the sub-ledger that supports the
balance:
In the next example, you can view additional information associated with the balance:
Integrating Metadata
Data Management supports the loading of metadata from a flat file in the order provided in
the file. This feature allows customers to build a metadata load file in any format, from any
source, and load the metadata to an Oracle Enterprise Performance Management Cloud
environment. Using this approach, users can set property defaults during the load or mapping
process.
For example, Oracle Hyperion Workforce Planning customers can load employees, jobs, organizations, and other work structure and compensation-related items from Oracle Human Capital Management Cloud to Planning.
Only Regular (such as Account and Entity), custom (such as Product), and Smart List dimensions are supported, and only for the following services:
• Planning Modules
• Planning
• Financial Consolidation and Close
• Tax Reporting
Note:
Loading metadata is only available for applications that are application type:
Planning. If the application type is Essbase, then use the Planning Outline
Load Utility to load metadata.
Note:
Profitability and Cost Management does not support the loading of metadata
by way of a file using Data Management.
To enable additional properties, add a row to the dimension metadata application. The
name of the row is the property or attribute name used in the Planning application.
4. Optional: To add a custom dimension (one designated as Generic in the Planning
application), in the target application, select the property name and enable the Select
Property field, and then map it to a Data Table Column Name value. Next create a
separate import format for each generic dimension. Then, in the dimension's data rule,
specify the dimension name (for example, Product, Movement) in the Dimension name of
the data rule's target options.
5. In Import Format, map the data from the metadata load file to the properties of the
dimensions in the EPM application. This allows users to import dimension members from
any file format. (The file must be "delimited - all data type" file type.)
For more information, see Defining Import Formats for File-Based Mappings.
6. Define the location to specify where to load data.
For more information, see Defining Locations.
7. In data load mapping, map, transform, or assign values to properties of a dimension to
the corresponding properties of the target application dimension member.
Properties are added as "dimensions" of a dimension application. For example, the Two
Pass Calculation property of Entity is added as a dimension and the flat file adds the
"yes" or "no" property on the load.
Note:
Checked dimensions in the dimension "application" are the ones loaded. If you do not map them, the load fails. There is no default if a mapping is missing. To avoid loading a "field" such as alias, clear the check box in the target application. To supply a single value for all loaded rows, specify the value in the Expression field and map * to * for that dimension.
• When you load a member that already exists in the Planning application (for example, to
change a property) and a parent is not specified in the load file, the member is left under
the existing parent. If a new parent is specified, the member is moved under the new
parent.
• Only one dimension can be loaded per load file.
• The records are loaded one by one. If a record fails to load, its associated exception is
written to the exception file and the load process resumes with the next record.
• Metadata is loaded in the order provided in the file.
• Member names with parentheses are treated as functions.
• When you load metadata using a data load rule to a Planning application, the export mode parameter must be set to "Store Data".
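The record-by-record behavior described above (a failing record goes to the exception file and the load continues) can be sketched as follows. This is an illustration of the pattern only; `demo_load` and its parenthesis check stand in for the real per-record load:

```python
def load_members(records, load_one):
    """Load metadata records one by one, collecting failures.

    Any record that raises is recorded in the exception list and the
    load process resumes with the next record.
    """
    exceptions = []
    loaded = 0
    for rec in records:
        try:
            load_one(rec)
            loaded += 1
        except Exception as exc:
            exceptions.append((rec, str(exc)))
    return loaded, exceptions

def demo_load(rec):
    # Stand-in for the real load; member names with parentheses
    # would be treated as functions, so reject them here.
    if "(" in rec:
        raise ValueError("member name contains parentheses")

count, errs = load_members(["Cash", "AR(net)", "Inventory"], demo_load)
print(count, errs)  # 2 records loaded, 1 exception recorded
```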
Note:
The Refresh Metadata and Refresh Members options are not available when
adding dimensions.
Note:
Only an administrator can create batch definitions.
3-26
Chapter 3
Integrating Oracle ERP Cloud Oracle General Ledger Applications
Note:
Files are not grouped by location in parallel mode.
For information on definition batch parameters, see Working with Batch Definitions.
For information on executing batches, see Executing Batches.
Note:
The Average Daily Balances (ADB) ledger is not supported in the current
integration.
Note:
Data Management also supports the Financials Accounting Hub (FAH) and
the Financial Accounting Hub Reporting Cloud Service (FAHRCS) as part of
its integration with the Oracle General Ledger.
Tutorial Video
6. Create category mapping for scenario dimension members in the EPM application to
which Oracle General Ledger balances are loaded.
See Defining Category Mappings in this section.
7. Define data load mapping to convert the chart of accounts values from the Oracle
General Ledger to dimension members during the transfer.
See Data Load Mapping in this section.
8. Define a data rule with the necessary filters and execute the rule.
A default filter is provided that includes all dimensions of the Essbase cube. The cube may have duplicate members, so fully qualified member names are required. The Essbase cubes work off the Oracle General Ledger segments, and there is a one-to-many relationship of chart of accounts to ledgers in the Oracle General Ledger.
Data Management creates filters when a rule is created. You can modify the filters as
needed but cannot delete them. (If the filters are deleted, Data Management recreates
the default values). For information about these filters, see Adding Filters for Data Load
Rules.
The process extracts and loads the data from Oracle ERP Cloud to Data Management.
See Adding Data Load Rules.
9. Optional: Write back the data to the Oracle ERP Cloud.
To write back data to Oracle ERP Cloud from a Planning or Planning Modules source system, set up a data rule. In this case, the filters are applied against the Planning or Planning Modules application.
Optionally, you can write back budget data from Planning to a flat file using a custom target application. This output file may be used to load data to any other application.
When you run the initialize process, the system imports all the applications
that match the filter condition. If no filters are provided, all applications are
imported.
Note:
Web services require that you use your native user name and password and
not your single sign-on user name and password.
Initializing the source system fetches all metadata needed in Data Management, such as ledgers, chart of accounts, and so on. It is also necessary to initialize the source system when there are new additions, such as chart of accounts, segments/chartfields, ledgers, and responsibilities in the source system.
The initialize process may take a while, and you can watch the progress in the job
console.
Note:
When re-initializing an Oracle General Ledger source, application period
mappings are reset/removed from the system. If specific period mappings are
required, then use the source period mapping tab to specify the period
mappings.
After you add a source system, select the source system in the table, and the
details are displayed in the lower pane.
The initialize process may take a while, so the user can watch the progress in the
job console.
Note:
Oracle General Ledger creates one Essbase cube per Chart of Account/
Calendar combination. In this case, you can use the same import format to
import data from Ledgers sharing this Chart of Accounts. Ledgers can be
specified as a filter in the data load rule.
You work with import formats on the Import Format screen, which consists of three
sections:
• Import Format Summary—Displays common information relevant to the source
and target applications.
• Import Format Detail—Enables you to add and maintain import format information.
• Import Format Mappings—Enables you to add and maintain import format
mapping information.
To add an import format for an Oracle General Ledger based source system:
1. On the Setup tab, under Integration Setup, select Import Format.
2. In the Import Format summary task bar, select Add.
In the upper grid of the Import Formats screen, a row is added.
3. In Name, enter a user-defined identifier for the import format.
You cannot modify the value in this field after a mapping has been created for this
import format.
4. In Description, enter a description of the import format.
5. In Source, select the Oracle General Ledger Chart of Accounts from the drop
down.
6. In Target, select the EPM Cloud target application.
7. Optional: In Expression, add any import expressions.
Data Management provides a set of powerful import expressions that enable it to
read and parse virtually any trial balance file into the Data Management database.
You enter advanced expressions in the Expression column of the field. Import
expressions operate on the value read from the import file.
Defining Locations
A location is the level at which a data load is executed in Data Management. Each location is
assigned an import format. Data load mapping and data load rules are defined per location.
You define locations to specify where to load the data. Additionally, locations enable you to
use the same import format for more than one target application where the dimensionality of
the target applications is the same. However, if you are using multiple import formats, you must define multiple locations.
Note:
You can create duplicate locations with the same source system and application
combination.
Note:
You must specify the budget currency of the control budget to which the budget
is written back.
accounts. Changes to a child or parent mapping table apply to all child and parent
locations.
Note:
If a location has a parent, the mappings are carried over to the child. However, changes to mapping can be performed only on the parent location.
7. Optional: In Logic Account Group, specify the logic account group to assign to
the location.
A logic group contains one or more logic accounts that are generated after a
source file is loaded. Logic accounts are calculated accounts that are derived from
the source data.
The list of values for a logic group is automatically filtered based on the Target
Application under which it was created.
8. Optional: In Check Entity Group, specify the check entity group to assign to the
location.
When a check entities group is assigned to the location, the check report runs for
all entities that are defined in the group. If no check entities group is assigned to
the location, the check report runs for each entity that was loaded to the target
system. Data Management check reports retrieve values directly from the target
system, Data Management source data, or Data Management converted data.
The list of values for a check entity group is automatically filtered based on the
target application under which it was created.
9. Optional: In Check Rule Group, specify the check rule group to assign to the
location.
System administrators use check rules to enforce data integrity. A set of check
rules is created within a check rule group, and the check rule group is assigned to
a location. Then, after data is loaded to the target system, a check report is
generated.
The list of values for a check rule group is automatically filtered based on the
target application under which it was created.
10. Click Save.
• To edit an existing location, select the location to modify, and then make
changes as necessary. Then, click Save.
• To delete a location, click Delete.
When a location is deleted, the location is removed from all other Data
Management screens, such as Data Load.
Tip:
To filter by the location name, ensure that the filter row is displayed above the
column headers. (Click to toggle the filter row.) Then, enter the text to filter.
You can filter locations by target application using the drop down at the top of
the screen.
2. From the Dimensions drop-down, select the dimension that you want to map.
The "*" represents all values. Data load mappings should be based upon your
EPM Cloud requirements.
When there is no update to the Oracle General Ledger value prior to the load, it is
still necessary to create the data load mapping for the dimensions to instruct Data
Management to create the target values.
At a minimum, map values for the "Account" and "Entity" dimensions since those
are transferred from Oracle General Ledger.
If you are transferring additional chart segments you must provide a mapping for
each destination dimension.
3. In Source Value, specify the source dimension member to map to the target
dimension member.
To map all General Ledger accounts to EPM Cloud "as is" without any
modification, in Source Value, enter: *, and from Target Value, enter: *.
4. Select the Like tab.
5. In Source Value, enter: * to indicate that all values should use the mapping.
These are the values from the Oracle General Ledger Chart of Accounts. Enter the
values directly.
6. In Target Value, enter the value for the accounting scenario to use to load the
budget information.
Enter the values that should be used in EPM Cloud to store the Oracle General Ledger
actual balances that are transferred.
Note:
If you are working with Account Reconciliation "source types," you can specify
either source system or sub-system (subledger) as a target value.
7. In Rule Name, enter the name of the data load rule used to transfer budget amounts to
the Oracle General Ledger.
Note:
Rules are evaluated in rule name order, alphabetically. Explicit rules have no
rule name. The hierarchy of evaluation is from Explicit to (In/Between/Multi) to
Like.
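The evaluation hierarchy in the note above (Explicit first, then In/Between/Multi, then Like, alphabetically by rule name within a type) can be sketched as a sort. This is an illustration of the ordering only, not the mapping engine itself; the mapping records and field names below are hypothetical:

```python
def order_mappings(mappings):
    """Sort data load mappings into evaluation order.

    Explicit mappings come first, then In/Between/Multi, then Like;
    within each type, rules are evaluated alphabetically by rule name.
    Explicit mappings have no rule name, so an empty string is used.
    """
    rank = {"Explicit": 0, "In": 1, "Between": 1, "Multi": 1, "Like": 2}
    return sorted(mappings, key=lambda m: (rank[m["type"]], m.get("rule", "")))

maps = [
    {"type": "Like", "rule": "B_like", "source": "4*"},
    {"type": "Explicit", "source": "401000"},
    {"type": "Between", "rule": "A_range", "source": "100,199"},
]
print([m["type"] for m in order_mappings(maps)])  # Explicit, Between, Like
```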
• Default—The Data Rule uses the Period Key and Prior Period Key defined in
Data Management to determine the source General Ledger periods mapped to
each Data Management period included in a Data Rule execution.
• Explicit—The Data Rule uses the Explicit period mappings defined in Data
Management to determine the source General Ledger periods mapped to
each Data Management period included in a data load rule execution. Explicit
period mappings enable support of additional Oracle General Ledger data
sources where periods are not defined by start and end dates.
• Click Save.
6. Click Add.
7. In Source Period Key, specify the last day of the month to be mapped from the Oracle
General Ledger source system.
Use the date format based on the locale settings for your locale. For example, in the
United States, enter the date using the MM/DD/YY format.
You can also browse to and select the source period key.
When you select the Source Period Key, Data Management populates the Source
Period and Source Period Year fields automatically.
8. In Adjustment period, specify the name of the adjustment period from the Oracle
General Ledger source.
For example, if the adjustment period from the Oracle General Ledger is Adj-Dec-16,
then enter: Adj-Dec-16 in this field.
9. In Target Period Key, specify the last day of the month to be mapped from the target
system.
Use the date format based on the locale settings for your locale. For example, in the
United States, enter the date using the MM/DD/YY format.
You can also browse to and select the target period key.
When you select the Target Period Key, Data Management populates the Target Period
Name, Target Period Month, and Target Period Year fields automatically.
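The automatic population of the period fields can be sketched as follows. This is an illustrative model only; the returned field names and the last-day validation are assumptions for the sketch, not Data Management's internal logic.

```python
from datetime import date
import calendar

def describe_period_key(period_key: date) -> dict:
    """Derive the period fields that are populated automatically from a
    last-day-of-month period key (illustrative sketch only)."""
    last_day = calendar.monthrange(period_key.year, period_key.month)[1]
    if period_key.day != last_day:
        raise ValueError("Period key must be the last day of the month")
    return {
        "period_month": period_key.strftime("%b"),  # e.g. "Dec"
        "period_year": period_key.year,             # e.g. 2016
    }

print(describe_period_key(date(2016, 12, 31)))
# {'period_month': 'Dec', 'period_year': 2016}
```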
11. On the Workflow tab, under Data Load, select Data Load Rule.
12. From the POV Bar, select the location to use for the data load rule.
Data load rules are processed within the context of a point of view. The default point of
view is selected automatically. The information for the point of view is shown in the POV
bar at the bottom of the screen.
13. Click Add.
14. In Name, enter the name of the data load rule.
15. From Category, select a category.
The categories listed are those that you created in the Data Management setup.
See Defining Category Mappings.
16. In Period Mapping Type, select the period mapping type for each data rule.
Valid options:
• Default—The Data Rule uses the Period Key and Prior Period Key defined in Data
Management to determine the source General Ledger periods mapped to each Data
Management period included in a Data Rule execution.
• Explicit—The Data Rule uses the Explicit period mappings defined in Data
Management to determine the source General Ledger periods mapped to
each Data Management period included in a data load rule execution. Explicit
period mappings enable support of additional Oracle General Ledger data
sources where periods are not defined by start and end dates.
17. From Include Adjustment Period, select one of the following options for
processing adjustment periods:
• No—Adjustment periods are not processed. The system processes only regular period mappings (as set up for "Default" and "Explicit" mappings). No is the default option for processing adjustments.
• Yes—If Yes is selected, then the regular period and adjustment period are
included. If the adjustment period does not exist, then only the regular period
is processed.
• Yes (Adjustment Only)—If Yes (Adjustment Only) is selected, the system
processes the adjustment period only. However, if the adjustment period does
not exist, the system pulls the regular period instead.
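The three options above can be summarized as a small decision sketch. This is illustrative only; the option strings follow the labels above, and the return values are a toy representation.

```python
def periods_to_process(option: str, adjustment_exists: bool) -> list:
    """Sketch of the Include Adjustment Period options: which period
    types a data rule execution would pull (illustrative only)."""
    if option == "No":
        # Only regular period mappings are processed.
        return ["regular"]
    if option == "Yes":
        # Regular plus adjustment; falls back to regular alone if the
        # adjustment period does not exist.
        return ["regular", "adjustment"] if adjustment_exists else ["regular"]
    if option == "Yes (Adjustment Only)":
        # Adjustment only; the regular period is pulled instead if no
        # adjustment period exists.
        return ["adjustment"] if adjustment_exists else ["regular"]
    raise ValueError(f"Unknown option: {option}")
```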
Note:
Drill Through is only supported if you load leaf level data for Oracle General Ledger
Chart of Account segments. If you load summary level data, then drill through does
not work.
Note:
If you want to bring in encumbrance from Oracle General Ledger and combine it
with Actual in Oracle Enterprise Performance Management Cloud, modify the
default dimension filter in the data load rule to include not only Actual but also
Encumbrance.
• Click to display the Member Select screen and select a member using the
member selector. Then, click OK.
The Member Selector dialog box is displayed. The member selector enables you to view
and select members within a dimension. Expand and collapse members within a
dimension using the [+] and [-].
The Selector dialog box has two panes—all members in the dimension on the left and
selections on the right. The left pane, showing all members available in the dimension,
displays the member name and a short description, if available. The right pane, showing
selections, displays the member name and the selection type.
You can use the V button above each pane to change the columns in the member
selector.
Note:
Assign filters for dimensions. If you do not assign filters, numbers from summary members are also retrieved.
and click .
c. To add special options for the member, click and select an option.
In the member options, "I" indicates inclusive. For example, "IChildren" adds
all children for the member, including the selected member, and
"IDescendants" adds all the descendants including the selected member. If
you select "Children", the selected member is not included and only its
children are included.
The member is moved to the right and displays the option you selected in the
Selection Type column. For example, "Descendants" displays in the Selection
Type column.
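As a rough illustration of these selection types, the following sketch expands a toy hierarchy. The function name and the dictionary-based hierarchy structure are assumptions for illustration, not the member selector's actual implementation.

```python
def expand_selection(member, hierarchy, selection_type):
    """Expand a member per the selector options described above.
    'I'-prefixed options include the selected member itself.
    'hierarchy' maps each member to a list of its children."""
    def descendants(m):
        out = []
        for child in hierarchy.get(m, []):
            out.append(child)
            out.extend(descendants(child))
        return out

    if selection_type == "Children":
        return hierarchy.get(member, [])          # children only
    if selection_type == "IChildren":
        return [member] + hierarchy.get(member, [])
    if selection_type == "Descendants":
        return descendants(member)                # all levels below
    if selection_type == "IDescendants":
        return [member] + descendants(member)
    raise ValueError(f"Unknown selection type: {selection_type}")
```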
This procedure is not for writing back budget revisions prepared using the Budget Revisions feature in Oracle Enterprise Performance Management Cloud, which automatically updates the budget in both the General Ledger and the EPM type control budget in Budgetary Control through another procedure.
The write-back to the Oracle General Ledger is also performed automatically when you write back a budget to Budgetary Control for an EPM type control budget, but only for the portion of your enterprise-wide budget that you write back to Budgetary Control.
For more information, see Using Financials for the Public Sector.
For Planning users, watch this tutorial video to learn about writing back EPM Cloud budgets
to the Oracle General Ledger:
Tutorial Video
For Planning Modules users, see the Tutorial Video.
If the target is the budget name, enter the value of the accounting scenario
that you plan to use.
2. Create a location.
The location is used to execute the transfer of budget amounts to the Oracle
General Ledger. The import format is assigned to the location. If you are using
multiple import formats, you also need to define multiple locations.
a. On the Setup tab, under Integration Setup, select Location.
b. Click Add.
c. In Name, enter a name for the location.
The location name is displayed when you initiate the transfer from the EPM Cloud application to the Oracle General Ledger.
d. In Import Format, select the name of the import format to use during the transfer.
Note:
The Source and Target field names are populated automatically
based on the import format.
The period mapping is used to convert periods to Oracle General Ledger accounting
calendar periods for the transfer.
Note:
When specifying the period, the starting and ending periods should be within a single fiscal year. Date ranges that cross fiscal years result in duplicate data.
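A pre-flight check along these lines can catch a range that crosses fiscal years before you run the transfer. This is a hypothetical helper, not part of Data Management; the fiscal-year start month is an assumption (calendar year by default).

```python
from datetime import date

def check_single_fiscal_year(start: date, end: date, fy_start_month: int = 1) -> int:
    """Return the fiscal year if both periods fall in the same one,
    otherwise raise. Illustrative guard against the duplicate-data
    pitfall noted above."""
    def fiscal_year(d: date) -> int:
        # A period before the fiscal-year start month belongs to the
        # prior fiscal year.
        return d.year if d.month >= fy_start_month else d.year - 1

    fy_start, fy_end = fiscal_year(start), fiscal_year(end)
    if fy_start != fy_end:
        raise ValueError(
            "Period range crosses fiscal years; this causes duplicate data"
        )
    return fy_start
```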
• Explicit—The Data Rule uses the Explicit period mappings defined in Data
Management to determine the source GL Periods mapped to each Data
Management Period included in a Data Rule execution. Explicit period
mappings enable support of additional GL data sources where periods are
not defined by start and end dates.
i. Click Save.
5. Add Source Option filters to the data load rule for write-back.
a. On the Workflow tab, under Data Load, select Data Load Rule.
b. From the POV Bar, select the location to use for the data load rule.
Data load rules are processed within the context of a point of view. The default
point of view is selected automatically. The information for the point of view is
shown in the POV bar at the bottom of the screen.
c. Select the data load rule to which to add a filter.
d. Select the Source Options tab.
iii. To add special options for the member, click and select an option.
In the member options, "I" indicates inclusive. For example, "IChildren"
adds all children for the member, including the selected member, and
"IDescendants" adds all the descendants including the selected member.
If you select "Children", the selected member is not included and only its
children are included.
The member is moved to the right and displays the option you selected in
the Selection Type column. For example, "Descendants" displays in the
Selection Type column.
Tip:
The selected member is displayed in Essbase syntax in the Filter Condition field.
6. Execute the data load rule to write back.
a. On the Workflow tab, under Data Load, select Data Load Rule.
b. From the POV Bar, verify the location and period to use for the data load rule.
c. Select Execute to submit a request to transfer budget amounts to the Oracle General
Ledger.
d. In Import from Source, select to import the budget information from Planning.
e. In Recalculate, leave blank.
f. In Export to Target, select to export the information to the Oracle General Ledger.
g. In Start Period, select the earliest general ledger period to transfer.
The list of values includes all the general ledger periods that you have defined in the
period mapping. This is typically the first period of the year for the initial budget load,
and then the current period or a future period during the year if there are updates to
the budget that are to be transferred to the Oracle General Ledger.
h. In End Period, select the latest General Ledger period to transfer.
The list of values includes all the general ledger periods you have defined in the
period mapping.
i. In Import Mode, select Replace to overwrite existing budget information in Oracle
General Ledger for the period range you selected (from the start period and end
period options).
Select Append to add information to existing Oracle General Ledger budget amounts
without overwriting existing amounts.
j. Click Run.
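The difference between the Replace and Append import modes can be modeled roughly as follows. This is a toy sketch under assumed data structures (keys are (period, account) pairs), not how the Oracle General Ledger actually stores budget amounts; in particular, whether Append sums amounts or creates parallel journal lines is simplified here.

```python
def apply_import_mode(existing: dict, incoming: dict, mode: str, periods: list) -> dict:
    """Toy model of the Replace/Append import modes described above.
    Keys are (period, account) tuples; values are amounts."""
    result = dict(existing)
    if mode == "Replace":
        # Clear existing amounts in the selected period range, then
        # load the incoming amounts for those periods.
        result = {k: v for k, v in result.items() if k[0] not in periods}
        result.update({k: v for k, v in incoming.items() if k[0] in periods})
    elif mode == "Append":
        # Add to existing amounts without clearing them first.
        for k, v in incoming.items():
            if k[0] in periods:
                result[k] = result.get(k, 0) + v
    else:
        raise ValueError(f"Unknown import mode: {mode}")
    return result
```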
Writing Back Actuals to the Oracle ERP Cloud - Oracle General Ledger
When actual information is complete in your Oracle Enterprise Performance Management
Cloud application, you can define the EPM Cloud application as a source and then write back
data to an Oracle ERP Cloud - Oracle General Ledger target application.
After specifying any necessary filters, you can then extract actual values from EPM Cloud
and write them to Oracle General Ledger. In the Export workflow step, the data is written to a
flat file, which in turn is copied to a file repository. When data is written back, journal entries
are created in the General Ledger.
On the Oracle ERP Cloud side, when configuring the ERP system, make sure the Oracle Fusion ERP Essbase cube has been created using "Create General Ledger Balances Cube." In addition, scenarios must already be set up in the Oracle Fusion ERP Essbase cube using the "Create Scenario Dimension Members" job.
To write back to the Oracle General Ledger:
1. An Oracle ERP Cloud/EPM Cloud integration requires that you have the privileges or user role and data access to work with all ERP ledgers to be integrated.
2. Create an import format to map dimensions to the Oracle General Ledger:
a. On the Setup tab, under Integration Setup, select Import Format.
b. Click Add.
3. Create a location.
The location stores the data load rules and mappings for the integration. The
import format is assigned to the location. If you are using multiple import formats,
you also need to define multiple locations.
a. On the Setup tab, under Integration Setup, select Location.
b. Click Add.
c. In Name, enter a name for the location.
The location name is displayed when you initiate the transfer from the EPM Cloud application to the Oracle General Ledger.
d. In Import Format, select the name of the import format to use during the transfer.
Note:
The Source and Target field names are populated automatically
based on the import format.
Note:
When specifying the period, the starting and ending periods should be within a single fiscal year. Date ranges that cross fiscal years result in duplicate data.
a. Click Add and add a separate row for each period that is to receive actual amounts.
Use the period names from the accounting calendar used by the ledger in the
general ledger.
b. Define a Period Key.
Once you select a value, information about the period key, prior period key, period
name, and the target period month are populated automatically.
• Target Period Month—The values in this field need to match the accounting
calendar for the ledger in the Oracle General Ledger, which receives the
transferred amounts.
• Target Period Year—Use values that correspond to the accounting period (as defined in the Target Period Month column).
See Defining Period Mappings.
6. On the Workflow tab, under Data Load, select Data Load Rule.
A data load rule is used to submit the process to transfer balances from the EPM Cloud
application to the Oracle General Ledger. The data load rule is created once but used
each time there is a transfer.
7. From the POV Bar, select the location to use for the data load rule.
Data load rules are processed within the context of a point of view. The default
point of view is selected automatically. The information for the point of view is
shown in the POV bar at the bottom of the screen.
8. In Name, specify a name for the data load rule.
9. From Category, select Actual.
10. From Import Format, select the import format associated with the write-back.
a. In File Name, select the data file name that contains the data you are loading.
It may be the same one from which you created the data source application, or
another file that has data as well as the appropriate header.
If only the file name is provided, data must be entered for a single period on the Rules Execution window.
To load multiple periods, create a file for each period and append a period
name or period key to the file name. When you execute the rule for a range of
periods, the process constructs the file name for each period and uploads it to
the appropriate POV.
b. From Directory, specify the directory to which the file has been assigned.
To navigate to a file located in a Data Management directory, click Select, and
then choose a file on the Select screen. You can also select Upload on the
Select page, and navigate to a file on the Select a file to upload page.
If you do not specify a file name, then Data Management prompts you for the file name when you execute the rule.
c. To load data into multiple periods, in the File Name Suffix Type drop-down,
select Period Name or Period Key.
A suffix is appended to the file name, and Data Management adds the file extension after adding the suffix. If you leave the file name blank, Data Management looks for a file with the suffix. When a file name suffix type is provided, the file name is optional and is not required on the Rule Execution window.
If the file name suffix type is a period key, the suffix indicator and period date format are required in the file name and must validate as a valid date format. For example, to load the January to March periods, enter 1_.txt in the file name field, select "Period Name" as the suffix indicator, and then run the rule for the January to March periods.
For example, the process uses:
• 1_Jan-2019.txt
• 1_Feb-2019.txt
• 1_Mar-2019.txt
d. In Period Key Date Format, specify the date format of the period key that is appended to the file name, in Java date format (SimpleDateFormat).
e. Click Save.
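The per-period file-name construction described above can be sketched like this. The suffix placement and the use of Python's strftime in place of Java's SimpleDateFormat are illustrative assumptions; the helper is hypothetical, not part of Data Management.

```python
from datetime import date

def suffixed_file_name(base, period_key, suffix_type="Period Key",
                       period_name=None, key_format="%b-%Y"):
    """Construct a per-period file name from a base name by inserting
    either the formatted period key or the period name before the
    extension. key_format plays the role of a SimpleDateFormat pattern
    such as MMM-yyyy (illustrative sketch only)."""
    stem, dot, ext = base.rpartition(".")
    if suffix_type == "Period Key":
        suffix = period_key.strftime(key_format)
    else:
        suffix = period_name
    return f"{stem}{suffix}{dot}{ext}"

print(suffixed_file_name("1_.txt", date(2019, 1, 31)))
# 1_Jan-2019.txt
```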
12. Click the Target Options tab.
Chapter 3
Integrating Budgetary Control
When working with data load rules, use target application options to specify options
specific to a location/data load rule (instead of the entire target application).
14. In Journal Source, enter a description of the journal source that matches the journal
source defined in the Oracle ERP Cloud.
15. In Journal Category, enter a description of the journal category that matches the journal category in the Oracle ERP Cloud.
16. Click Save.
17. Execute the data load rule to write back:
a. On the Workflow tab, under Data Load, select Data Load Rule.
b. From the POV Bar, verify the location and period to use for the data load rule.
c. Select Execute to submit a request to write back actual amounts to the Oracle
General Ledger.
d. In Import from Source, select to import the actual value information from the EPM
Cloud application.
e. In Recalculate, leave blank.
f. In Export to Target, select to export the information to the Oracle General Ledger.
g. In Start Period, select the earliest general ledger period to transfer.
The list of values includes all the general ledger periods that you have defined in the
period mapping. This is typically the first period of the year for the initial actual load,
and then the current period or a future period during the year if there are updates to
the actual values that are to be written back to the Oracle General Ledger.
h. In End Period, select the latest General Ledger period to transfer.
The list of values includes all the general ledger periods you have defined in the
period mapping.
i. In Import Mode, select Replace to overwrite existing actual information in Oracle
General Ledger for the period range you selected (from the start period and end
period options).
Select Append to add information to existing Oracle General Ledger actual value
amounts without overwriting existing amounts.
j. Click Run.
Note:
In this release, Budgetary Control only integrates with Planning and Planning
Budget Revisions modules.
Note:
Drill through is not supported in this release for the Budgetary Control
integration with the EPM Cloud.
2. Create the EPM Cloud target application to represent the Planning and Budgetary
Control application to be integrated.
As a rule, when loading and writing back data between the Planning application and the
Budgetary Control application, you can use the system-generated target application that
represents the Planning application in the EPM Cloud and the target applications
generated from source system initialization that represents the Budgetary Control
balances Essbase cube, instead of creating your own. Do not change, add, or delete any
dimension details for these system-generated target applications on the Target
Application screen.
See Registering Target Applications.
3. Create an import format to map the dimensions from the Budgetary Control application to
those of the Planning application to which commitment, obligation and expenditure
amounts are loaded.
See Working with Import Formats in this section.
4. Create a location to associate the import format with the Budgetary Control.
See Defining Locations in this section.
5. Create category mapping for scenario dimension members in the Planning application to
which Budgetary Control balances are loaded.
See Defining Category Mappings in this section.
6. Create data load mappings to map the dimension members between the Budgetary
Control application and the Planning application.
See Data Load Mapping in this section.
7. Create a data load rule (integration definition) to execute the data load and transfer
commitment, obligation and expenditure amounts to the EPM Cloud application from the
Budgetary Control application.
See Adding Data Load Rules.
Note:
Make sure the source system name does not include any spaces.
For example, use FinCloudBC rather than Fin Cloud BC.
You must update this password anytime you change your Oracle ERP Cloud password.
7. In Web Services URL, enter the server information for the Fusion web service. For
example, enter https://server.
For customers using release 19.01 and earlier of the Oracle ERP Cloud, use the old WSDL to make the connection, and then specify the URL in the following format:
If you are using a release URL format earlier than R12, replace "fs" with "fin" in the URL that you use to log on when entering the Web Services URL.
If you are using a release URL format later than R12, replace "fs" with "fa" in the URL that you use to log on, or simply copy and paste the server from the logon URL into the Web Services URL.
For example, you might specify: https://efsdcdgha02.fin.efops.yourcorp.com/publicFinancialCommonErpIntegration/ErpIntegrationService?WSDL
8. Click Test Connection.
9. Click Configure.
The confirmation "Source system [source system name] has been configured
successfully" is displayed.
10. On the Source System screen, click Initialize.
Initializing the source system fetches all metadata needed in Data Management, such as
budgets, budget chart of accounts, and so on. It is also necessary to initialize the source
system when there are new additions, such as chart of accounts, segments/chartfields,
ledgers, and responsibilities in the source system.
The initialize process may take a while, and you can watch the progress in the job
console.
11. Click Save.
You work with import formats on the Import Format screen, which consists of three
sections:
• Import Format Summary—Displays common information relevant to the source
and target applications.
• Import Format Detail—Enables you to add and maintain import format information.
• Import Format Mappings—Enables you to add and maintain import format
mapping information.
To add an import format for a Budgetary Control based source:
1. On the Setup tab, under Integration Setup, select Import Format.
2. In the Import Format summary task bar, select Add.
In the upper grid of the Import Formats screen, a row is added.
3. In Name, enter a user-defined identifier for the import format.
You cannot modify the value in this field after a mapping has been created for this
import format.
4. In Description, enter a description of the import format.
5. In Source, select the Budgetary Control application from the drop-down.
6. In Target, select the Planning application.
7. Go to the Import Format Mapping section.
The target dimensions are populated automatically.
8. From Source Column, specify the source dimensions in the source Budgetary Control application that correspond to the dimensions in the target Planning application.
Note:
For Planning dimensions that cannot be mapped from a Budgetary
Control dimension, such as the "Version" and "Plan Element" dimension
in a Planning application, leave them unmapped. You can specify a
single member for those unmapped Planning dimensions later in Data
Load Mappings.
Defining Locations
A location is the level at which a data load is executed in Data Management. Each
location is assigned an import format. Data load mapping and data load rules are
defined per location. You define locations to specify where to load the data.
Additionally, locations enable you to use the same import format for more than one target application where the dimensionality of the target applications is the same. However, if you are using multiple import formats, you must define multiple locations.
Note:
You can create duplicate locations with the same source system and application
combination.
To create a location:
1. On the Setup tab, under Integration Setup, select Location.
2. In Location, click Add.
3. From Location Details, in Name, enter the location name.
4. From Import Format, enter the import format.
The import format describes the source system structure, and it is executed during the
source system import step. A corresponding import format must exist before it can be
used with a location.
Additionally:
• Source name is populated automatically based on the import format.
• Target name is populated automatically based on the import format.
Note:
If a location has a parent, the mappings are carried over to the child. However, changes to mappings can only be performed on the parent location.
7. Optional: In Logic Account Group, specify the logic account group to assign to the
location.
A logic group contains one or more logic accounts that are generated after a source file is
loaded. Logic accounts are calculated accounts that are derived from the source data.
The list of values for a logic group is automatically filtered based on the Target
Application under which it was created.
8. Optional: In Check Entity Group, specify the check entity group to assign to the
location.
When a check entities group is assigned to the location, the check report runs for
all entities that are defined in the group. If no check entities group is assigned to
the location, the check report runs for each entity that was loaded to the target
system. Data Management check reports retrieve values directly from the target
system, Data Management source data, or Data Management converted data.
The list of values for a check entity group is automatically filtered based on the
Target Application under which it was created.
9. Optional: In Check Rule Group, specify the check rule group to assign to the
location.
System administrators use check rules to enforce data integrity. A set of check
rules is created within a check rule group, and the check rule group is assigned to
a location. Then, after data is loaded to the target system, a check report is
generated.
The list of values for a check rule group is automatically filtered based on the
Target Application under which it was created.
10. Click Save.
• To edit an existing location, select the location to modify, and then make
changes as necessary. Then, click Save.
• To delete a location, click Delete.
When a location is deleted, the location is removed from all other Data
Management screens, such as Data Load.
Tip:
To filter by the location name, ensure that the filter row is displayed
above the column headers. (Click to toggle the filter row.) Then,
enter the text to filter.
You can filter Locations by target application using the drop down at the
top of the screen.
4. In Category, enter a name that corresponds to the Scenario dimension member in the Planning and Planning Budget Revisions applications to which you want to load budget consumption amounts.
For example, if you want to load the sum of commitments, obligations, other anticipated
expenditures, and expenditures from Budgetary Control to one Scenario dimension
member in the Planning application, you need one Category mapping entry.
If you are using the Budget Revisions feature, the system-generated Scenario dimension member for this usage is OEP_Consumed.
If you are not using Budget Revisions and still want to load Budgetary Control balances (instead of the encumbrance balances from the General Ledger) to the Planning application, you can create a custom Scenario dimension member.
Either way, create a category mapping entry and enter this Scenario dimension member name as the Target Category. The corresponding Category can have the same name for convenience, or any name you like, such as Budgetary Control Consumption or just Consumed.
5. Click Save.
1. On the Workflow tab, under Data Load, select Data Load Mapping.
2. From the POV Bar, select the location, period, and category corresponding to the EPM Cloud application to which to load data for the data load mapping and the data load rule.
Data load rules are processed within the context of a point of view. The default point of
view is selected automatically. The information for the point of view is shown in the POV
bar at the bottom of the screen.
3. From the Dimensions drop-down, select the source dimension to map.
You must provide mapping for each target Planning dimension.
For dimensions that are not mapped in the Import Format, you must map to a
specific target member, such as "OEP_Working" in the unmapped "Version"
dimension and "OEP_Load" in the unmapped "Plan Element" dimension in the
Planning application.
For dimensions that are mapped in Import Format, even if there is no update to the
EPM Cloud dimensions value prior to the load, it is still necessary to create an "as
is" mapping.
4. Select the Like tab.
5. In Source Value, specify the source dimension member to map to the target
dimension member.
To map all Budgetary Control accounts to EPM Cloud "as is" without any
modification, in Source Value, enter *, and from Target Value, enter *.
6. In Target Value, select the member name to which the source members are
mapped.
You can also click the search to display the Member Selector and select a member
name from the member list.
7. In Rule Name, enter the name of the data load rule used to transfer budget
amounts to Budgetary Control.
Note:
Rules are evaluated in rule name order, alphabetically. Explicit rules
have no rule name. The hierarchy of evaluation is from Explicit to (In/
Between/Multi) to Like.
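The evaluation hierarchy in the note can be sketched as a sort key. This is an illustrative model; the tuple representation of a rule as (type, name) is assumed for the sketch.

```python
def order_mapping_rules(rules):
    """Order mapping rules per the note above: Explicit first (they
    have no rule name), then In/Between/Multi, then Like; within a
    type, alphabetically by rule name. 'rules' is a list of
    (type, name) tuples."""
    precedence = {"Explicit": 0, "In": 1, "Between": 1, "Multi": 1, "Like": 2}
    # Explicit rules carry no name, so treat a missing name as "".
    return sorted(rules, key=lambda r: (precedence[r[0]], r[1] or ""))
```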
Data load rules are processed within the context of a point of view. The default point of
view is selected automatically. The information for the point of view is shown in the POV
bar at the bottom of the screen.
3. Click Add.
4. In Name, enter the name of the data load rule.
5. In Description, enter a description to identify the data load rule when you launch the
transfer.
6. In Category, select the category that corresponds to the Scenario dimension member in
the Planning application to which the Budgetary Control consumption amounts are
loaded, the one you created in the Defining Category Mapping step above.
7. In Period Mapping Type, select the period mapping type for each data rule.
Valid options:
• Default—The Data Rule uses the Period Key and Prior Period Key defined in Data
Management to determine the Source General Ledger Periods mapped to each Data
Management period included in a Data Rule execution.
Select Default period mapping type for loading consumption amounts to the Planning
application.
• Explicit—The Data Rule uses the Explicit period mappings defined in Data
Management to determine the source GL Periods mapped to each Data
Management Period included in a Data Rule execution. Explicit period mappings
enable support of additional General Ledger data sources where periods are not
defined by start and end dates.
8. Click Save.
9. In Target Plan Type, select the plan type of the target to which you want to load budget.
10. Select Source Options to specify dimensions and filters.
16. To verify the results of the transfer, on the Workflow tab, under Data Load, select
Data Load Workbench.
• Click to display the Member Select screen and select a member using
the member selector. Then, click OK.
The Member Selector dialog box is displayed. The member selector enables you to view
and select members within a dimension. Expand and collapse members within a
dimension using the [+] and [-].
The Selector dialog box has two panes—all members in the dimension on the left and
selections on the right. The left pane, showing all members available in the dimension,
displays the member name and a short description, if available. The right pane, showing
selections, displays the member name and the selection type.
You can use the V button above each pane to change the columns in the member
selector.
Note:
Assign filters for dimensions. If you do not specify an appropriate level of members, even summary members are retrieved, which results in an overstatement.
click .
c. To add special options for the member, click and select an option.
In the member options, "I" indicates inclusive. For example, "IChildren" adds all
children for the member, including the selected member, and "IDescendants" adds all
the descendants including the selected member. If you select "Children", the selected
member is not included and only its children are included.
The member is moved to the right and displays the option you selected in the
Selection Type column. For example, "Descendants" displays in the Selection Type
column.
Use this procedure to write back original and revised budget prepared using
thePlanning feature to Oracle General Ledger.
This procedure is not for writing back budget revisions prepared using the Budget
Revisions feature in the Oracle Enterprise Performance Management Cloud, which
automatically updates the budget in both General Ledger and the EPM type control budget
in Budgetary Control through another procedure. For more information, see Setting Up
Budget Revisions and Integration with Budgetary Control.
This procedure synchronizes the budget written back to an EPM type control budget in
Budgetary Control with the budget in Oracle General Ledger, making it possible to skip
the Writing Back Budgets to the Oracle ERP Cloud procedure in Administering Data
Management for Oracle Enterprise Performance Management Cloud for the portion of
your enterprise-wide budget that you write back to Budgetary Control.
For more information, see Using Financials for the Public Sector.
For Planning users, watch this tutorial video to learn about writing back EPM Cloud
budgets to the Oracle General Ledger:
Tutorial Video
For Planning Modules users, see the Tutorial Video.
At a high level, here are the steps for writing back EPM Cloud budgets to
Budgetary Control:
1. Register, configure, and initialize a source connection to the Budgetary Control.
See Configuring a Connection to a Budgetary Control Source.
Note:
If you have already registered a source system to connect to the
Budgetary Control application in Loading Budgetary Control to the EPM
Cloud Process Description topic (see Loading Budgetary Control Budget
Consumption Balances to the EPM Cloud Process Description), you
must reuse the same source system.
Note:
Drill through is not supported in this release.
2. Select the Budgetary Control target application to which to write back budgets
from the EPM Cloud source system.
A Budgetary Control application is downloaded with the target application type
Essbase.
As a rule, when writing back to a Budgetary Control application, do not change,
add, or delete any dimension details on the Target Application screen.
See Registering Target Applications.
3. Map the dimensions between the Planning application and the Budgetary Control target
application by building an import format.
See Working with Import Formats.
4. Create a location to associate the import format for an EPM Cloud application with a
Budgetary Control application.
See Defining Locations.
5. Create a category mapping for the scenario dimension members in the Planning
application from which the budget is written back to Budgetary Control.
See Defining Category Mappings.
6. Create data load mappings to map dimensions between the Planning application and
Budgetary Control.
See Data Load Mapping.
7. Create a data load rule (integration definition) to map dimension members between the
Planning application and Budgetary Control.
See Adding Data Load Rules.
8. View the EPM Cloud budgets loaded to Budgetary Control.
See Viewing the EPM Cloud Budgets Loaded to Budgetary Control.
9. Optionally, you can write out budget data from Planning to a flat file using a custom
target application. This output file may be used to load data to any other application.
See Creating a Custom Target Application.
5. In Source, select the EPM Cloud source application from the drop down.
6. In Target, select the Budgetary Control application.
7. Select the Import Format Mapping section.
The target dimensions are automatically populated.
8. From Source Column, specify dimensions in the source Planning application that
correspond to the dimensions in the target Budgetary Control application.
The Source Column drop-down displays all EPM Cloud source system segments
available for the Planning application.
Note:
Arbitrarily map the "Control Budget" Budgetary Control dimension to the
"Account" Planning dimension. Without a mapping to the Control Budget,
the import process fails.
Defining Locations
A location is the level at which a data load is executed in Data Management. Each
location is assigned an import format. Data load mapping and data load rules are
defined per location. You define locations to specify where to load the data.
Additionally, locations enable you to use the same import format for more than one
target application where the dimensionality of the target applications is the same.
However, if you are using multiple import formats, you must define multiple locations.
Note:
You can create duplicate locations with the same source system and
application combination.
To create a location:
1. On the Setup tab, under Integration Setup, select Location.
2. In Location, click Add.
3. From Location Details, in Name, enter the location name.
4. From Import Format, enter the import format.
The import format describes the source system structure, and it is executed during the
source system import step. A corresponding import format must exist before it can be
used with a location.
Additionally:
• Source name is populated automatically based on the import format.
• Target name is populated automatically based on the import format.
Note:
You must specify the budget currency of the control budget to which budget is
written back.
Note:
If a location has a parent, the mappings are carried over to the child. However,
changes to mappings can only be performed on the parent location.
7. Optional: In Logic Account Group, specify the logic account group to assign to the
location.
A logic group contains one or more logic accounts that are generated after a source file is
loaded. Logic accounts are calculated accounts that are derived from the source data.
The list of values for a logic group is automatically filtered based on the Target
Application under which it was created.
8. Optional: In Check Entity Group, specify the check entity group to assign to the
location.
When a check entities group is assigned to the location, the check report runs for all
entities that are defined in the group. If no check entities group is assigned to the
location, the check report runs for each entity that was loaded to the target system. Data
Management check reports retrieve values directly from the target system, Data
Management source data, or Data Management converted data.
The list of values for a check entity group is automatically filtered based on the Target
Application under which it was created.
9. Optional: In Check Rule Group, specify the check rule group to assign to the location.
System administrators use check rules to enforce data integrity. A set of check rules is
created within a check rule group, and the check rule group is assigned to a location.
Then, after data is loaded to the target system, a check report is generated.
The list of values for a check rule group is automatically filtered based on the
Target Application under which it was created.
10. Click Save.
Tip:
To filter by the location name, ensure that the filter row is displayed
above the column headers. (Click to toggle the filter row.) Then,
enter the text to filter.
You can filter Locations by target application using the drop down at the
top of the screen.
2. From the POV bar, select the location, period, and category corresponding to the EPM
Cloud Scenario from which the budget is written back.
3. Select the Like tab.
4. In Source Value, specify the source dimension member to map to the target dimension
member.
To map all Budgetary Control accounts to EPM Cloud "as is" without any modification, in
Source Value, enter *, and from Target Value, enter *.
5. In Target Value, select the control budget name in Budgetary Control to which the budget
is loaded.
You can also click the search icon to display the Member Selector and select the control
budget name from the member list.
6. In Rule Name, enter the name of the data load rule used to transfer budget amounts to
Budgetary Control.
Note:
Rules are evaluated in rule name order, alphabetically. Explicit rules have no
rule name. The hierarchy of evaluation is from Explicit to (In/Between/Multi) to
Like.
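The evaluation order described in this note can be sketched as follows. The mapping entries, rule names, and member codes here are invented for illustration, and the real mapping engine in Data Management handles more rule types (In, Between, Multi) than this sketch:

```python
# Illustrative sketch of mapping evaluation order: Explicit mappings win
# outright, then Like rules are tried alphabetically by rule name.
from fnmatch import fnmatch

explicit = {"1110": "Cash"}                  # exact source -> target

like_rules = sorted([                        # sorted = rule-name order
    ("R2_Other", "1*", "OtherAssets"),
    ("R1_Cash",  "11*", "CashGroup"),
])

def map_member(source):
    if source in explicit:                   # Explicit first
        return explicit[source]
    for _name, pattern, target in like_rules:  # then Like, alphabetically
        if fnmatch(source, pattern):
            return target
    return None

print(map_member("1110"))   # Cash      (explicit match)
print(map_member("1120"))   # CashGroup (R1_Cash sorts before R2_Other)
```

Because "R1_Cash" sorts before "R2_Other", its pattern is tried first, which is why rule naming affects the outcome when patterns overlap.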
Select EPM Financials module versus Hyperion Planning based on the source budget
type classification of the control budget.
12. Execute the data load rule.
Note:
When selecting the Execute Rule submission parameters, always choose
Replace as the Import Mode for writing back to the Budgetary Control.
• View data load rules before executing them—See Using the Data Load Workbench.
• Check the data rule process details—See Viewing Process Details.
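For scripted execution, data load rules can also be run through the Data Management REST API as a job of type DATARULE. The sketch below only builds the request body; the rule name, period names, and service URL are placeholders, and parameter values should be confirmed against the EPM Cloud REST API documentation for your release:

```python
# Hedged sketch: request body for the Data Management REST "run data
# rule" job. Values here are placeholders for illustration only.
import json

def data_rule_payload(rule_name, start_period, end_period):
    return {
        "jobType": "DATARULE",
        "jobName": rule_name,
        "startPeriod": start_period,
        "endPeriod": end_period,
        # Per the note above, use Replace as the Import Mode when
        # writing back to Budgetary Control.
        "importMode": "REPLACE",
        "exportMode": "STORE_DATA",
    }

payload = data_rule_payload("BCWriteBackRule", "Jan-21", "Jan-21")
print(json.dumps(payload, indent=2))
# POST this body to https://<service>/aif/rest/V1/jobs (placeholder URL)
```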
13. On the Workflow tab, under Monitor, select Process Details.
• Click to display the Member Select screen and select a member using
the member selector. Then, click OK.
The Member Selector dialog box is displayed. The member selector enables you
to view and select members within a dimension. Expand and collapse members
within a dimension using the [+] and [-].
The Selector dialog box has two panes—all members in the dimension on the left
and selections on the right. The left pane, showing all members available in the
dimension, displays the member name and a short description, if available. The
right pane, showing selections, displays the member name and the selection type.
You can use the V button above each pane to change the columns in the member
selector.
Note:
Assign filters for dimensions. If you do not specify an appropriate level of
members, even summary members are retrieved, and this results in an
overstatement.
c. To add special options for the member, click and select an option.
In the member options, "I" indicates inclusive. For example, "IChildren" adds
all children for the member, including the selected member, and
"IDescendants" adds all the descendants including the selected member. If
you select "Children", the selected member is not included and only its children are
included.
The member is moved to the right and displays the option you selected in the
Selection Type column. For example, "Descendants" displays in the Selection Type
column.
6. On the Review Budgetary Control Balances page, select the Control Budget and
any search parameters for the budget that you want to review.
7. Click Search.
The results of the search are shown on a results page.
8. Optional: If you write back to a control budget with a source budget type classified
as "EPM Financials module," the system synchronizes the loaded budget
to Oracle General Ledger for you, without your having to perform the steps in the
Writing Back Budgets to the Oracle ERP Cloud topic. You can verify the
updated budget in Oracle General Ledger by completing the following:
9. From the Oracle ERP Cloud, from General Accounting, select Period Close.
12. On the Inquire on Detail Balances page, select a data access set context if not already
done and specify the search parameters for the budget that you want to review.
Currently, the Scenario for a budget auto-synchronized by an EPM Financials module
type control budget has the same name as its source budget.
Chapter 3
Integrating Oracle NetSuite
Tutorial Video
Note:
After specifying an Oracle NetSuite source system and connection
information in Data Management, you must initialize the source system to
create a Target Application definition for each NSPB Sync SuiteApp Saved
Search. Metadata saved searches include "Metadata" in the saved search
name, and data saved searches include "Data" in the saved search name.
Note:
Data generated from the NSPB Sync SuiteApp Saved Search is used for
importing data only, and not for write-back.
At a high level, these are the steps for loading data from an Oracle NetSuite data
source:
1. An administrator installs NSPB Sync SuiteApp Saved Searches, which is a shared
bundle. Before you can install the bundle, it must be shared with your account.
2. Perform the following tasks. (See the topics in the Oracle NetSuite Planning and
Budgeting Cloud Service Sync guide for information on performing these tasks.
Access to the guide requires a NetSuite login.)
• You are required to have an Oracle NetSuite login to access NSPB Sync
SuiteApp.
For information on setting up the login, see the Oracle NetSuite Planning and
Budgeting Cloud Service Sync guide.
• Enable the required features in your Oracle NetSuite account. See "Required
Features for Installing the NSPB Sync SuiteApp."
• Install the SuiteApp. See "Installing the NSPB Sync SuiteApp."
• Set the file encryption password. See "Setting Up a Password for File
Encryption."
• Create user records for EPM Cloud users. These user records must have an
EPM Cloud Integration role. See "Creating a EPM Cloud User Record."
• Set up token-based authentication for EPM Cloud users. See "Setting Up
Token-Based Authentication."
• Set up single sign-on (SSO). NSPB Sync SuiteApp Saved Searches supports single
sign-on (SSO) through any SSO service that offers SAML 2.0. With an SSO account,
users can navigate between NetSuite and Planning without entering their credentials
each time. See "Setting Up Menu Navigation to Planning."
3. In Data Management, register the source system with integration user credentials.
This step includes specifying the connection details and the drill URL.
See Configuring a Source Connection to Oracle NetSuite.
4. Run the initialize process to import the definitions of all saved searches owned by the
user.
When you initialize the source system, Data Management imports all saved search
definitions owned by the user. If you don't want to import all saved search definitions, you
can go to the target application and select individual saved search definitions one by one.
If you have already initialized the source system, add any incremental saved search
definitions in the target application as well.
For more information, see Creating an Oracle NetSuite Data Source.
5. Define the import format to map columns from the saved search to dimensions in the
EPM Cloud application.
For more information, see Adding Import Formats for Oracle NetSuite Data Sources.
6. Once the initialization process is complete, you can pick an NSPB Sync SuiteApp Saved
Search when adding a target application. When you select Oracle NetSuite as a data
source, you are presented with a list of the saved searches from the selected Oracle
NetSuite source.
You can also provide source filter criteria on the application filters tab. These source
filters are the same as Oracle NetSuite "Criteria", which filter the data from the NSPB
Sync SuiteApp Saved Searches.
7. Define source mapping entries in the calendar mapping section to map the Oracle
NetSuite periods to the EPM Cloud periods.
Define any period mappings. Available options are explicit or default period mappings.
For more information on periods mappings available for an Oracle NetSuite integration,
see Managing Periods in Oracle NetSuite.
8. In the Import Format, specify the NSPB Sync SuiteApp data source as the source
application and your Planning application as the target application.
For more information, see Adding Import Formats for Oracle NetSuite Data Sources.
9. Define a location to indicate where to load the data.
Each location includes an import format, data load rules, and data load mappings.
For more information, see Defining Locations.
10. Create data load mappings.
This pulls the data from the Oracle NetSuite instance into Data Management, maps the
data and then shows the results in the workbench. If the mapping succeeds without
errors, the data is loaded to the target application.
For more information, see Adding Data Load Rules for an Oracle NetSuite Data Source.
For more information about applying filter criteria, see Applying Oracle NetSuite
Application Filters.
For more information about executing a rule, see Running Data Load Rules.
Note:
The NSPB Sync SuiteApp Saved Search policy for accessing integrations
has changed. It now requires token-based authorization instead of basic
authentication to set up the connection to NSPB Sync SuiteApp
Saved Searches from the EPM Cloud. Basic authorization credentials will be
made read-only in Release 21.06.
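For background, NetSuite token-based authentication follows the OAuth 1.0a pattern, in which each request is signed with the consumer and token secrets instead of sending a password. The sketch below shows only the HMAC-SHA256 signing step with placeholder secrets; a real integration must also build the full OAuth signature base string and headers as described in the NetSuite Help Center:

```python
# Hedged sketch of the OAuth 1.0a signing step used by token-based
# authentication. All secrets and the base string are placeholders;
# base-string construction and the other OAuth parameters are omitted.
import base64
import hashlib
import hmac

def sign(base_string, consumer_secret, token_secret):
    # OAuth 1.0a signing key: consumer secret and token secret joined by "&"
    key = f"{consumer_secret}&{token_secret}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

signature = sign("GET&https%3A%2F%2Fexample%2Fendpoint&oauth_nonce%3Dabc",
                 "my_consumer_secret", "my_token_secret")
print(signature)  # deterministic for the same inputs
```

Because the Token Secret is part of the signing key, losing it (see the note below about copying the Token ID and Token Secret) means the integration can no longer sign requests and a new token must be issued.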
3. From the Access Token page, select Internal NS Application NS-PBCS as the
application name.
4. Click Save and copy the Token ID and Token Secret from this page.
Note:
This is the only time you can view these values. If you navigate away from this
page, you cannot get access to these values.
5. From the NSPB Sync SuiteApp home page, under Integration, select Data
Management.
The initialize process is used to extract the NSPB Sync SuiteApp Saved Searches
metadata information.
The initialize process may take a long time to complete. You can watch the progress on
the job console.
13. Click Save.
Note:
You can click Refresh on the Target Application screen to refresh any saved
searches that have been created in Oracle NetSuite after you have initialized the
source system in Data Management.
Note:
When you create an Oracle NetSuite data source, dimension details are populated
automatically and mapped directly to the target dimension class "Generic." As a rule,
when loading data from an Oracle NetSuite data source, do not change, add, or
delete any dimension details on the Target Application screen.
You can also launch the Search and Select screen by clicking and selecting a
source entity.
Management joins the names to form the name TestPBCS Quarter Summary Balance.
6. Click OK.
7. Click Save.
2. In the Import Format summary task bar, select the import format associated with
the drill through to which to add an additional filter.
3. In the Import Format Mapping section, map the source dimension associated
with the additional filter to an Attribute (User defined attribute - used as needed for
mapping or drill through) column.
To do this, click Add, select Attribute and then map the Source Column to the
Attribute.
For example, you could map a Subsidiary ID source column to the Attribute 4 row.
4. In Drill Through URL, click and enter the search type criteria used for the
drill through for the additional filter.
For example, if you want to add a Subsidiary ID as an additional filter, enter
&Transaction_SUBSIDIARY=$ATTR4$ to the list of parameters.
In this case, you would specify the entire drill through URL definition as:
Searchtype=Transaction&searchid=customsearch_nspbcs_trial_balance&Transaction_ACCOUNT=$ATTR1$&Transaction_POSTINGPERIOD=$ATTR2$&Transaction_SUBSIDIARY=$ATTR4$&Transaction_POSTING=T&
For more information, see Defining Drill Through Parameters to Oracle NetSuite.
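To make the token substitution concrete, the following sketch shows how the $ATTRn$ placeholders in a drill through URL definition might be resolved against a data row. The row values are invented, and this is an illustration of the pattern rather than Data Management's actual drill implementation:

```python
# Illustrative sketch: replace each $ATTRn$ token in a drill through URL
# template with the corresponding attribute value of the drilled row.
import re

TEMPLATE = ("Searchtype=Transaction&searchid=customsearch_nspbcs_trial_balance"
            "&Transaction_ACCOUNT=$ATTR1$&Transaction_POSTINGPERIOD=$ATTR2$"
            "&Transaction_SUBSIDIARY=$ATTR4$&Transaction_POSTING=T&")

def resolve_drill_url(template, row):
    """Substitute $ATTRn$ tokens with the row's ATTRn values."""
    return re.sub(r"\$(ATTR\d+)\$",
                  lambda m: str(row.get(m.group(1), "")), template)

row = {"ATTR1": "54000", "ATTR2": "212", "ATTR4": "3"}  # invented values
print(resolve_drill_url(TEMPLATE, row))
```

Mapping the Subsidiary ID to Attribute 4 in the import format is what makes the ATTR4 value available for substitution at drill time.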
Note:
When specifying the Drill Through URL detail components here, you
must also set up the server component for the drill through in the source
system. See also Configuring a Source Connection to Oracle NetSuite.
5. Click Save.
When the drill through is first selected from EPM Cloud to Data Management, no
extra records are included. The second drill to Oracle NetSuite also does not include
any additional records.
Data Management provides a set of powerful import expressions that enable it to read
and parse virtually any file into the Data Management database. You enter advanced
expressions in the Expression column of the field. Import expressions operate on the
value that is read from the import file.
For more information, see Adding Import Expressions.
For information on adding an import format for a data driver, see Adding an Import
Expression for a Data Driver.
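As a toy illustration of the idea that import expressions operate on the value read from the file, the sketch below interprets a single Factor-style expression that scales an imported amount. It is not Oracle's expression engine, and the syntax handled here is only a fragment of what Data Management supports:

```python
# Hedged sketch: apply one kind of import expression to a value read
# from the import file. Only a Factor-style expression is handled.
def apply_expression(expression, value):
    if expression.startswith("Factor="):
        factor = float(expression.split("=", 1)[1])
        return float(value) * factor
    return value  # unrecognized expressions left to the real engine

# e.g. amounts supplied in cents, scaled to whole units on import
print(apply_expression("Factor=0.01", "125000"))  # 1250.0
```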
11. Click Save.
Note:
You can add additional (non-dimensional/non-EPM Cloud) columns in the result set
of the saved search, such as a memo, transaction date, document #, or transaction
details. To do this, set up the non-dimensional columns in the NSPB Sync
SuiteApp Saved Search and then map the columns to Lookup or Attribute columns
in the Import Format option in Data Management.
For more information about lookup dimensions, see Adding Lookup Dimensions.
Data load rules are defined for locations that you have already set up. You can create
multiple data load rules for a target application so that you can import data from multiple
sources into the target application.
The data load rule is created once, but used each time there is a transfer.
To create a data load rule:
1. On the Workflow tab, under Data Load, select Data Load Rule.
2. From the POV Bar, select the location to use for the data load rule.
Data load rules are processed within the context of a point of view. The default
point of view is selected automatically. The information for the point of view is
shown in the POV bar at the bottom of the screen.
3. Click Add.
4. In Name, enter the name of the data load rule.
5. In Description, enter a description to identify the data load rule when you launch
the transfer.
6. In Category, leave the default category value.
The categories listed are those that you created in the Data Management setup.
See Defining Category Mappings.
7. In Period Mapping Type, select the period mapping type for each data rule.
Valid options:
• Default—The Data Rule uses the Period Key and Prior Period Key defined in
Data Management to determine the Source General Ledger Periods mapped
to each Data Management period included in a Data Rule execution.
• Explicit—The Data Rule uses the Explicit period mappings defined in Data
Management to determine the source GL Periods mapped to each Data
Management Period included in a Data Rule execution. Explicit period
mappings enable support of additional GL data sources where periods are not
defined by start and end dates.
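The difference between the two mapping types can be sketched as a lookup: Default resolves through date-based period keys, while Explicit resolves by period name. All period names and keys below are invented for illustration; real mappings are maintained in Data Management:

```python
# Illustrative sketch of Default vs. Explicit period mapping resolution.
default_mapping = {            # period key (date) -> source GL period
    "2021-01-31": "Jan-21",
    "2021-02-28": "Feb-21",
}
explicit_mapping = {           # EPM period name -> source period, no dates
    "Jan-21": "Period01-FY21",
    "Feb-21": "Period02-FY21",
}

def resolve_source_period(mapping_type, period):
    if mapping_type == "Default":
        return default_mapping[period]     # keyed by period (end) date
    if mapping_type == "Explicit":
        return explicit_mapping[period]    # keyed by period name
    raise ValueError(mapping_type)

print(resolve_source_period("Default", "2021-01-31"))   # Jan-21
print(resolve_source_period("Explicit", "Jan-21"))      # Period01-FY21
```

This is why Explicit mappings can support sources whose periods are not defined by start and end dates: the lookup never touches a calendar date.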
8. In Import Format, specify the import format based on the file format of Saved
Search application (for example, single column numeric, and multi-column data)
that you want to load to the target application.
9. In Calendar, select the source system calendar.
10. In Target Plan Type, select the plan type of the target system to which you want to
load budget.
11. Optional: Select the Source Filters tab to apply filter conditions to the source
Oracle NetSuite Saved Search application.
See Applying Oracle NetSuite Application Filters.
See Defining Application Options for Essbase.
13. Optional: Select Custom Options to specify any free form integration information.
On the detail side (where you specify the search type components), the drill URL to Oracle
NetSuite requires the following parameters:
• "search type"
• "search ID"
• Optionally, you can specify additional parameters to filter the drill based on the Account
and Period.
Search Type
The drill through parameter list includes the search type of "Transaction." It is specified in the
drill through URL as:
Searchtype=Transaction&searchid=customsearch_nspbcs_trial_balance&Transaction_ACCOUNT=$ATTR1$&Transaction_POSTINGPERIOD=$ATTR2$&Transaction_DEPARTMENT=$ATTR5$&Transaction_CLASS=$ATTR4$&Transaction_INTERNALID=$ATTR3$&Transaction_POSTING=T&
Search ID
The drill through list also includes "Search ID." Specify the parameter by using Search
StringID.
You can find the value from the Search Definition in Oracle NetSuite.
https://<NetSuite Domain>/app/common/search/searchresults.nl?searchtype=Transaction&searchid=customsearch_nspbcs_all_transactions_det.
Additional Parameters
You can specify additional parameters to filter the drill based on account and period.
Below are some commonly used parameters:
they are missing in the Results section of your Saved Search in Oracle NetSuite as shown
below:
For more information about Internal IDs, see the NetSuite Help Center.
In this example, the drill URL format for the Saved Search is:
searchtype=Transaction&searchid=<NAME OF SAVED SEARCH>&Transaction_TYPE&detailname=$<ATTR COLUMN FOR TRANSACTION ID>$&Transaction_POSTINGPERIOD=$<ATTR COLUMN FOR PERIOD ID>$&Transaction_POSTING=T&Transaction_MAINLINE=F&
Based on the example in step 8, you specify the drill URL as:
searchtype=Transaction&searchid=customsearch_nspbcs_all_transactions_sum&Transaction_TYPE&detailname=$ATTR3$&Transaction_ACCOUNT=$ATTR1$&Transaction_POSTINGPERIOD=$ATTR2$&Transaction_POSTING=T&Transaction_MAINLINE=F&
For more information on search types, see Saved Search Requirements in the Drill
Through.
11. Click OK and then click Save.
Tutorial Video
Chapter 3
Integrating with the Oracle HCM Cloud
Note:
Drill through and write-back are not supported in Oracle HCM Cloud.
Because multiple data rules are involved in integrating various data from the Oracle HCM
Cloud, batches are defined to import the series of data rules.
At a high level, the steps for loading data from an Oracle HCM Cloud extract data source are:
1. Make sure that you have been assigned a Human Capital Management Integration
Specialist job role.
A Human Capital Management Integration Specialist job role is required to
manage Human Capital Management extracts. The Human Capital Management
Integration Specialist (Job Role) is the individual responsible for planning,
coordinating, and supervising all activities related to the integration of human
capital management information systems.
For more information, see Human Capital Management Integration Specialist (Job
Role).
2. In Data Integration, then from the Application option, select the application
corresponding to the Workforce Planning application, and then on the Dimension
Detail tab, assign classifications for the seeded dimensions in Planning Modules.
Classifications for the seeded dimensions include the "Employee," "Job," "Property,"
and "Union" dimensions.
3. In the Source System option, select Oracle HCM Cloud as a source system.
Note:
You must import EPBCS Assignment_<Release>.xdoz into the /Custom
folder of BI Publisher and not Oracle HCM Cloud.
Note:
If you require non-English characters, download the EPBCS HCM
Extract.zip file and then unzip it. Next, go to the BI Publisher
Document Repository and import the EPBCS Assignment.xdoz file.
Note:
In all cases, the EPBCS Initialize.xml must always be imported in Oracle
HCM Cloud.
Note:
All extracts must be imported without the Legislative Group. That is, the
Legislative Group must be blank.
If your integration with Oracle HCM Cloud is a single occurrence using only one
import format, location and data load rule, you can leave the prefix name blank.
10. In Target Application, each Oracle HCM Cloud extract imported is registered
automatically as a target data source application.
You can register an individual Oracle HCM Cloud extract as a data source entity
by selecting a source entity (or individual Oracle HCM Cloud extract) from which to
create the application definition. For more information, see Creating an Oracle
HCM Cloud Data Source Application.
For information on using an updated version of an extract, see Updating Existing
Oracle HCM Cloud Extracts.
11. If necessary, modify any dimension details.
All columns from the Oracle HCM Cloud extract are mapped to the EPM target
dimensions class with the type "Generic."
Note:
As a rule, when loading data from an Oracle HCM Cloud data source, do
not change, add, or delete any dimension details on the Target
Application screen.
12. Any application filters associated with the data source are predefined during the
initialization.
13. The Import Format for each Oracle HCM Cloud extract is predefined during the
initialization step.
For information on adding or modifying import formats, see Adding Import Formats.
14. The Location, including the import format, data load rules, and data load mappings,
is predefined during the initialization steps.
Additionally, Oracle HCM Cloud extracts support the transformation of actual data
imported from Oracle HCM Cloud in the Data dimension column.
For example, in the Oracle HCM Cloud, the employee type might be "F" (for a full-time
employee) or "T" (for a temporary employee), while in Planning Modules the same
designations are shown as "FULLTIME" or "TEMP."
For information on modifying data load mappings, see Creating Member Mappings.
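Conceptually, this transformation is a simple value mapping, as in the following sketch. The code set here is just the example pair from the text; a real integration defines these as data load mappings in Data Management rather than in code:

```python
# Illustrative sketch of the data-value transformation described above:
# Oracle HCM Cloud employee-type codes mapped to Planning Modules names.
HCM_TO_PLANNING = {
    "F": "FULLTIME",   # full-time employee type in HCM Cloud
    "T": "TEMP",       # temporary employee
}

def transform_employee_type(code):
    # Fall back to the raw code when no mapping entry exists.
    return HCM_TO_PLANNING.get(code, code)

print(transform_employee_type("F"))  # FULLTIME
print(transform_employee_type("T"))  # TEMP
```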
16. Data load rules are predefined during the initialization for the target application.
For information on modifying a data load rule, see Defining Data Load Rules to
Extract Data.
Any source filters associated with the data source are created automatically during
the integration. You can select any specific criteria on the Source Filters tab to filter
the results that are loaded.
17. The batch definition is predefined during the initialization step.
Data Management populates the batch definition detail, parameters, and jobs for
the data load rules.
If you need to modify a batch definition, see Working with Batch Definitions.
Note:
These steps assume that you have configured the source system, defined the
source connection, and downloaded the EPBCS HCM Extract.zip. For information
about any of these processes, see Process Description for Integrating Data from
Oracle HCM Cloud.
The confirmation "Source system [Source system name] has been configured
successfully" is displayed.
11. On the Source System screen, click Initialize.
The initialize process is used to configure the integration for importing data from
the Oracle HCM Cloud.
One data source application is defined for each Oracle HCM Cloud data extract.
The Oracle HCM Cloud extract definitions are exported and shipped out of the box
as XML files.
The initialize process may take a while, and you can watch the progress in the job
console.
12. When prompted, specify a prefix for the predefined content of the Oracle HCM
Cloud integration.
Specify a prefix name if you plan to have multiple Oracle HCM Cloud integrations,
or if you plan to use multiple import formats, locations, or data load rules for the
integration with Oracle HCM Cloud. The prefix name is used as a unique identifier
for each integration.
If your integration with Oracle HCM Cloud is a single occurrence using only one
import format, location, and data load rule, leave the prefix name blank.
4. From the Task menu, and then from HCM Extracts, select Manage Extract Definitions.
6. Click Import Extract to import the pre-defined Oracle HCM Cloud extract
definition XML files.
When importing the extract definitions, the extract name must be the same as the first
part of the file name. For example, when importing "EPBCS Assignment
Data_1902.xml," the extract name must be specified as "EPBCS Assignment
Data_1902."
7. Import all the pre-defined Oracle HCM Cloud extract definitions:
• EPBCS Account Merit Metadata—EPBCS Account Merit
Metadata_<Release>.xml
3. From the Catalog screen, and then under Shared Folders, select Custom.
Note:
This process enables you to download metadata using Data Management.
4. From the Task menu, and then from HCM Extracts, select Manage Extract Definitions.
5. Click the eyeglass icon for the extract definition to view the extract run details and status.
The eyeglass icon is on the far right for Assignment Data.
6. Click the Extract Run that was submitted.
Verify that the Status column shows a green check mark and the word "Succeeded."
7. From the Fusion Navigator menu, and then from Payroll, select Checklist.
8. Select the extract definition that was submitted for the Flow Pattern.
The process flow name is displayed after you click Search.
9. Click the process flow name link to see the task details.
10. Click the Go to Task (right arrow) icon for the process.
11. From the Actions menu, select View Results for the extract output.
12. Download each of the metadata files and load them into the Oracle Hyperion Workforce Planning application.
For metadata with hierarchy structures, submit the extract, and then download the files and load them into the Workforce Planning application.
You can also launch the Search and Select screen by clicking and selecting a
source entity.
5. Click OK.
6. Optional: You can apply filter conditions to the Oracle HCM Cloud data source so that only those records that meet the selected conditions are returned to Data Management. You can specify a single filter condition or multiple filter conditions, and additionally specify the exact values that you want returned.
To apply a filter condition:
Note:
When you create an Oracle HCM Cloud data source, dimension details
are populated automatically and mapped directly to the target dimension
class "Generic." As a rule, when loading data from an Oracle HCM Cloud
data source, do not change, add, or delete any dimension details on the
Target Application screen.
Editing Application Filters for the Oracle HCM Cloud Data Source Application
System administrators can add and edit the application filters that are associated with
the Oracle Human Capital Management Cloud if they customize the Oracle HCM
Cloud extract definition.
By default, these application filters are defined explicitly for the Oracle HCM Cloud data source. It is recommended that you do not modify the filter definitions if you use the predefined integration with Oracle HCM Cloud.
To edit Oracle HCM Cloud application filters:
1. Select the Setup tab, and then under Register, select Target Application.
2. Select the Oracle HCM Cloud data source to which to apply any filters.
3. From Application Details, select the Application Filters tab.
4. Select the name of the field to which to apply the filter condition.
5. Click Edit.
6. From the Edit Application Filters screen, select the name of the value for which to change the filter condition.
Adding Data Load Rules for an Oracle HCM Cloud Data Source
Application
Data load rules are predefined during the initialization for the target application.
However, if you have added new Oracle Human Capital Management Cloud extracts
in the target application, or modified the import format, location, or data load
mappings, you can define and execute new data load rules to load the results of the
Oracle HCM Cloud extracts to your Oracle Hyperion Workforce Planning application.
You can specify filter values so that only those records that meet the selected conditions are returned.
Data load rules are defined for locations that have been already set up. You can create
multiple data load rules for a target application so that you can import data from
multiple sources into a target application.
The data load rule is created once, but used each time there is a transfer.
To create a data load rule:
1. On the Workflow tab, under Data Load, select Data Load Rule.
2. From the POV Bar, select the location to use for the data load rule.
Data load rules are processed within the context of a point of view. The default
point of view is selected automatically. The information for the point of view is
shown in the POV bar at the bottom of the screen.
3. Click Add.
4. In Name, enter the name of the data load rule.
5. In Description, enter a description to identify the data load rule when you launch
the transfer.
6. In Category, leave the default category value.
The categories listed are those that you created in the Data Management setup.
See Defining Category Mappings.
7. In Period Mapping Type, select the period mapping type for each data rule.
Valid options:
• Default—The Data Rule uses the Period Key and Prior Period Key defined in
Data Management to determine the Source General Ledger Periods mapped
to each Data Management period included in a Data Rule execution.
• Explicit—The Data Rule uses the Explicit period mappings defined in Data
Management to determine the source General Ledger Periods mapped to
each Data Management Period included in a Data Rule execution. Explicit
period mappings enable support of additional General Ledger data sources
where periods are not defined by start and end dates.
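The difference between the two period mapping types can be illustrated with a small sketch. The mapping tables and period names below are hypothetical stand-ins for the mappings maintained in the Data Management Period Mapping setup.

```python
# Hypothetical mapping tables; real mappings live in the Data
# Management Period Mapping setup screens.
default_mapping = {"Jan-24": "Jan-24", "Feb-24": "Feb-24"}   # period key -> source GL period
explicit_mapping = {"Jan-24": "PERIOD_01_FY24"}              # EPM period -> source GL period

def resolve_source_period(period, mapping_type):
    """Pick the source General Ledger period for an EPM period,
    mirroring the Default vs. Explicit choice described above."""
    if mapping_type == "Explicit":
        # Explicit mappings support sources whose periods are not
        # defined by start and end dates.
        return explicit_mapping[period]
    return default_mapping[period]
```

With Default, the period key itself drives the lookup; with Explicit, an administrator-maintained table maps each EPM period to an arbitrary source period name.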
8. In Import Format, specify the import format based on the file format of the Saved
Search application (for example, single column numeric, and multi-column data)
that you want to load to the target application.
9. In Calendar, select the source system calendar.
10. In Target Plan Type, select the plan type of the target system to which you want to load
budget.
11. Optional: Select the Source Filters tab to apply filter conditions to the source Oracle
HCM Cloud data source application.
For example, you can filter legislative data groups to a specific group. For more
information, see Creating an Oracle HCM Cloud Data Source Application.
12. Optional: Select Target Options to specify any target options.
13. Optional: Select Custom Options to specify any free form integration information.
During the initialization of the Oracle HCM Cloud source system, Data Management
creates an application for each metadata source. You can map each application to
your metadata application and then execute the load. Note that the system does not
create mappings automatically.
Note:
For detailed information on the Oracle HCM Cloud fields belonging to each predefined extract definition, see Oracle HCM Cloud Extract Definition Field Reference.
You cannot modify the value in this field after a mapping has been created for this import
format.
15. In Description, enter a description of the import format.
16. In Source, select the Oracle HCM Cloud metadata application for the source system.
17. In Target, select the target system.
20. In Drill URL, enter the URL used for the drill-through.
21. In the Mapping section, map the source columns to the dimensions in the target
application.
For information on adding or modifying import formats, see Adding Import Formats.
For information on adding import expressions, see Adding Import Expressions.
22. Click Save.
Each location includes an import format, data load rules, and data load mappings.
For more information, see Defining Locations.
25. Click Save.
26. On the Workflow tab, under Data Load, select Data Load Mapping.
For more information about creating a data load rule, see Defining Data Load Rules to
Extract Data.
30. Apply any source filters.
Chapter 3
Loading Data from the Oracle ERP Cloud
Any source filters associated with the data source are created automatically during
the integration. You can select any specific criteria on the Source Filters tab to filter
the results that are loaded.
Depending on the Oracle HCM Cloud metadata category, the following source
filters apply:
• Effective Date—Select the date on which you want the trees to be effective.
• Legislative Data Group—Legislative data groups are a means of partitioning
payroll and related data. At least one legislative data group is required for
each country where the enterprise operates. Each legislative data group is
associated with one or more payroll statutory units.
• Tree Code— Tree code for hierarchy in Oracle HCM Cloud (for objects with
hierarchy, for example: Org, Position)
• Tree Version—Tree Version for hierarchy in Oracle HCM Cloud
• Changes Only—Controls the extract mode. Valid options are N or Y.
The following table describes the different extract modes, their lookup values
and descriptions:
This step pushes the data from the metadata application into Data Management,
maps the data and then shows the results in the workbench. If mapping succeeds
without errors, the data is loaded to the target application.
For more information about executing a rule, see Running Data Load Rules.
downloaded file into Data Integration, the data and metadata can be subsequently mapped
and loaded to the Oracle Enterprise Performance Management Cloud.
You can use either pre-packaged queries or customized BI reports to define your own report parameters for extracting data from the Oracle ERP Cloud.
Note:
Define any necessary filters to limit the amount of data returned by the BI
Publisher extract. Filters ensure the best load performance.
5. Set up the integration mapping between the Oracle ERP Cloud data source and the
target application by building an import format.
See Working with Import Formats.
6. Define the location used to associate the import format.
See Defining Locations.
7. Define data mapping to map the members from the source to target.
See Creating Member Mappings.
8. Define the data load rule and specify any source and target options.
See Defining Data Load Rule Details.
Note:
Web services require that you use your native user name and password
and not your single sign-on user name and password.
7. In Fusion Web Services URL, enter the server information for the Fusion web service. For example, enter https://server.
For customers using release 19.01 and earlier of the Oracle ERP Cloud, use the old WSDL to make the connection, and specify the URL in the following format:
https://server/publicFinancialCommonErpIntegration/ErpIntegrationService?WSDL
For customers who implement after release 19.01, specify the URL in the following format:
https://server/fscmService/ErpIntegrationService?WSDL
8. Click Test Connection.
9. Click Configure.
The confirmation "Source system [source system name] configuration has been updated
successfully" is displayed.
10. Click Save.
You can select dynamic filters to define as report parameters from the Oracle ERP
cloud data source when the actual parameter value needs to be defined at the data
load rule or application level.
An example of a dynamic filter is "Currency Type," where you can select Entered, Statistical, or Total.
You can specify a single filter condition or multiple filter conditions, and additionally specify the exact values that you want returned.
In some cases, you can change a static parameter value in the Report parameter list
by replacing it with a parameter value enclosed within $$ notations. This type of filter
applies to the Ledger ID and Period parameters.
For example, in the image below, the static parameter value argument1
= $LEDGER_NAME$ has been added to the Report Parameter List as a parameter:
On the Edit Application Filter screen, a display name has been entered for the
parameter. This is the name as it is shown in the rule or integration options.
In data load rules, this is how the parameter shows on the Source Options screen:
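The $...$ substitution described above might be sketched like this. The token syntax follows the $LEDGER_NAME$ example; the parameter names and list format are illustrative, not the exact internal representation.

```python
import re

def expand_report_parameters(param_list, values):
    """Replace $NAME$ tokens in a Report Parameter List string with the
    values supplied at the rule or application level. Unresolved tokens
    are left untouched."""
    def repl(match):
        return values.get(match.group(1), match.group(0))
    return re.sub(r"\$([A-Z_]+)\$", repl, param_list)

# Hypothetical parameter list using the $LEDGER_NAME$ example above:
params = "argument1=$LEDGER_NAME$;argument2=$PERIOD$"
expanded = expand_report_parameters(
    params, {"LEDGER_NAME": "Vision Operations", "PERIOD": "Jan-24"})
```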
For more information, see Selecting Period Report Parameters from the Oracle ERP
Cloud.
6. To edit any display options for any dynamic report parameter, on the Application Filters
tab, click Edit.
7. On the Edit Application Filters screen, click Add.
8. From the Name field, select the name of the parameter.
9. In Display Prompt, enter the name of the display prompt for the filter on the Source
Options tab in Data Management or Edit Integration page in Data Integration.
10. In Display Order, specify the display order of the filter on the Source Options or Edit
Integration page.
If this field is blank, the custom filter cannot be displayed, and the default value is
used as the filter value.
For example, enter 99 to show the filter in the 99th position sequence or position in
a list of filters. Display orders are listed from lowest to highest.
11. In Display Level, select display level of the parameter (application and/or rule) to
indicate the level at which the filter is displayed.
12. In Validation Type, select None.
When you run a multi-period load, data is imported for the range specified in the START_PERIODKEY and END_PERIODKEY parameter list. For the system to load the data into the correct periods, the source period mappings must exactly match the Year and Period columns in the data extract.
Multi-period imports are available if the report accepts a period as a range. If the report
accepts only the period name (START_PERIODKEY parameter), no multi-period imports are
available.
3. From the Search and Select page, in Name, select a report or extract, and click
OK.
You can select any BI Publisher report as long as it produces an output file in CSV format. Not all reports in Fusion produce CSV output.
4. From Process Details, select the parameters for the extract or report, and click
Submit.
In the following example, "Ledger" is Vision Operations and "Amount type" is YTD or
PTD.
Be sure to specify the Accounting Period. The Accounting Period is the parameter that
will be set up in Data Management so that the report can be reused.
Note:
The integration to Oracle ERP Cloud will fail unless the selected extract on the
Oracle ERP Cloud side has one or more bind parameters passed from the EPM
Cloud. The bind parameter is a placeholder for actual values in the SQL
statement. Bind parameters must be enclosed in tilde (~~) characters. For
example, to use "Period" as a bind parameter specify: ~PERIOD~. The name
must exactly match the name specified in the SQL query.
To do this, create a bind parameter directly in the report, which is not
referenced in the Data Model query. In Data Management, specify a random
string such as "ABC" in the "Report Parameter List" that will be passed to the
bind parameter you created in the report definition.
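Checking a query for tilde-enclosed bind parameters before registering it can be sketched as follows. This is a local validation aid only, not part of Data Management or BI Publisher.

```python
import re

def bind_parameters(sql):
    """Find bind parameters enclosed in tilde characters (for example
    ~PERIOD~) in a BI Publisher data model query."""
    return re.findall(r"~([A-Za-z_]+)~", sql)

# Hypothetical query; the integration fails unless at least one bind
# parameter is passed from EPM Cloud, so the list must be non-empty.
query = "SELECT * FROM gl_balances WHERE period_name = ~PERIOD~"
found = bind_parameters(query)
```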
When the report has been generated, the Output section shows the results of the
submission.
5. Click Republish, and then from the report output page, click csv.
6. Select the CSV output file, right-click it, and then select Open.
11. From the Select screen, navigate to the folder where you saved the CSV file,
select it and click OK.
The report is saved as the target application and the Application Name is
populated automatically.
12. In Prefix, specify a prefix to make the application name unique.
The prefix is concatenated with the file name to form a unique application name.
For example, if you want to name an application with the same name as an
existing one, you can assign your initials as the prefix.
13. Click OK.
Data Management registers the application and returns all the columns in
Dimension Details.
15. Click Application Filters.
16. In Source System Name, specify the name of the Oracle Financial source
system.
For example, if the name of your source system is "ERP Cloud," specify ERP
Cloud.
You can use an Oracle ERP Cloud or GL source system name or define a new one.
For more information, see Configuring a Source Connection for an Oracle ERP Cloud
Source System.
17. In Report Name, specify the path and name of the report in the Oracle ERP Cloud.
Steps 17 - 23 show you how to get the report name from Oracle ERP Cloud. If you
already have the report path and name, enter the information in the Report name field (in
Data Management) and skip to step 24.
18. Navigate to Oracle ERP Cloud, find the report, and select Reports and Analytics to
retrieve the parameter information.
22. In the Custom Properties section, scroll down to the path field.
23. Copy the path (and name) and paste it to the Report Name field when registering
the target application in Data Management.
24. Return to Data Management and in the Report Parameter list, specify the report
parameters of the custom query.
Make sure you specify a random string such as "ABC" in the "Report Parameter
List" that will be passed to the bind parameter you created in the report definition.
If you create a report with a query that doesn’t have bind parameters passed from
the EPM Cloud, the process will fail on the Oracle ERP Cloud side.
Steps 24 - 25 explain how to get the report parameters from the BI Publisher
extract and then populate the Report Parameter List field with them in Data
Management.
25. Navigate to Oracle ERP Cloud, and from the Overview page, select the report and click
Resubmit.
This step enables you to view and capture the report parameters defined in the BI
Publisher extract or report.
27. Navigate to Data Management and paste the report parameter list from the Warnings
window into the Report Parameter List of your custom query.
The image below shows the report parameter pasted into the Report Parameter List
28. Set up the integration mapping between the Oracle ERP Cloud data source and
the target application by building an import format.
See Working with Import Formats.
29. Define the location used to associate the import format.
Privilege	Description
GL_RUN_TRIAL_BALANCE_REPORT_PRIV—Import data from the Oracle General Ledger to the EPM Cloud.
GL_ENTER_BUDGET_AMOUNTS_FOR_FINANCIAL_REPORTING_PRIV—Write-back data from EPM Cloud to the Oracle General Ledger.
FUN_FSCM_REST_SERVICE_ACCESS_INTEGRATION_PRIV—Execute the REST API used to perform the integration.
Chapter 3
Integrating Account Reconciliation Data
Privilege	Description
GL_RUN_TRIAL_BALANCE_REPORT_PRIV—Import data from the Oracle General Ledger to the Oracle Enterprise Performance Management Cloud.
FUN_FSCM_REST_SERVICE_ACCESS_INTEGRATION_PRIV—Execute the REST API used to perform the integration.
When importing data, you can assign one of the following custom roles to the integration user:
Privilege	Description
GL_RUN_TRIAL_BALANCE_REPORT_PRIV—Import data from the Oracle General Ledger to the EPM Cloud.
GL_ENTER_BUDGET_AMOUNTS_FOR_FINANCIAL_REPORTING_PRIV—Write-back data from EPM Cloud to the Oracle General Ledger.
FUN_FSCM_REST_SERVICE_ACCESS_INTEGRATION_PRIV—Execute the REST API used to perform the integration.
Allowlist
If you have enabled IP Allowlist in the Oracle ERP Cloud, then add the Oracle EPM Cloud IP
addresses to the list.
Refer to IP Allowlist for Web Service Calls Initiated by Oracle Cloud Applications (Doc
ID 1903739.1) for details.
Integrating BAI and SWIFT MT940 Format Bank File Transactions and
Balances
As an integration mechanism, Data Management provides an adapter-based
framework that enables Account Reconciliation customers to:
• add a Bank file as a source system (identified with an application type "Data
Source").
• associate either a BAI format bank file (which uses a Bank Administration Institute
file format) or a SWIFT MT940 format bank file (which uses a SWIFT MT940 file
format) with the Bank File source system, and then stage transactions to be
loaded to an Account Reconciliation target application.
Specific Data Management functions, such as adding locations and member mappings, are handled using the standard Data Management workflow process. The loading of the data is also executed in Data Management.
• associate either a BAI format bank file (which uses a Bank Administration Institute
file format) or a SWIFT MT940 format bank file (which uses a SWIFTMT940 file
format) with the Bank File source system, and then stage balances to be loaded to
an Account Reconciliation target application. Balances are end of day bank
balances either posted once a month or on a daily basis.
Specific Data Management functions, such as adding locations and member mappings, are handled using the standard Data Management workflow process. The loading of the data is also executed in Data Management.
• add a target application for each Transaction Matching data source as needed,
and then map the dimensions from a file-based source system (including a BAI file
or SWIFTMT940 file) to the Transaction Matching target application in the import
format. In this way a customer can easily import data from any source system by
way of a file format and push it to a Transaction Matching target application.
Specific Data Management functions, such as adding locations and member mappings, are handled using the standard Data Management workflow process.
When creating a target application for Transaction Matching, in the Import Format, select the Amount field from the target application instead of Data to load the data correctly.
Integrating BAI Format Bank File or SWIFT MT940 Format Bank File
Transactions
When loading bank file data, you create a data source associated with the bank file source system. Data Management converts the BAI and SWIFT MT940 file formats to CSV format.
The source application for BAI Format Bank File Transactions has the following pre-
defined constant columns and headers:
• Account
• Amount
• Transaction Type
• Currency
• Transaction Date
• Bank Reference
• Customer Reference
• Bank Text
The source application for a Swift MT940 Format Bank File Transactions file has the following
pre-defined constant columns and headers:
• Transaction Reference Number
• Account
• Statement Number
• Statement Date
• Transaction Date
• Amount
• Transaction Type
• Customer Ref
• Bank Ref
• Bank Text
• Additional Info1
• Additional Info2
• Additional Info3
The source application for a BAI Format Bank File Balance file has the following pre-defined
constant columns and headers:
• Closing Balance
• Currency (the account currency is extracted first. If it is unavailable, the group currency is
extracted. In most cases, the account currency and the group currency are identical.)
• Transaction Type
• Currency
• Statement Date
• Account
The source application for a SWIFT MT940 Format Bank File Balances file has the following pre-defined constant columns and headers:
• Closing Balance
• Currency
• Transaction Type
• Currency
• Statement Date
• Account
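The conversion to a staged CSV file under the pre-defined headers might look like the following sketch, using the BAI transactions column list above. Parsing of the raw bank file itself is omitted, and the sample record values are made up.

```python
import csv
import io

# The pre-defined constant headers for BAI Format Bank File Transactions,
# as listed above.
BAI_TXN_HEADERS = ["Account", "Amount", "Transaction Type", "Currency",
                   "Transaction Date", "Bank Reference",
                   "Customer Reference", "Bank Text"]

def to_staging_csv(records):
    """Sketch of the bank-file-to-CSV conversion Data Management performs:
    write parsed records under the pre-defined constant headers."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=BAI_TXN_HEADERS)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# Hypothetical parsed record:
sample = [{"Account": "101", "Amount": "250.00", "Transaction Type": "165",
           "Currency": "USD", "Transaction Date": "2024-02-01",
           "Bank Reference": "BR1", "Customer Reference": "CR1",
           "Bank Text": "DEPOSIT"}]
staged = to_staging_csv(sample)
```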
To add a BAI Format Bank File or SWIFT MT940 Format Bank File Transactions source
system:
1. From the Home page, click (Navigator icon), and then from the Integration category, select Data Management.
2. Select the Setup tab, and then under Register, select Target Application.
3. In Target Application, in the summary grid, click Add, and then select Data
Source.
4. From Source System, select Bank file.
5. From Application Name, select an application name from the list of values.
Available types of application include:
• BAI Format Bank File Transactions
• SWIFT MT940 Format Bank File Transactions
For a BAI Format Bank File Transactions file, the available application names are
a combination of match types and a data source name on that match type in
Transaction Matching. For example, in Transaction Matching, the match type
INTERCO has two data sources AP and AR. This results in two target application
names in the available list; INTERCO:AP and INTERCO:AR.
Note:
The Data Management connection to the BAI source file fails under the
following circumstances:
• The match type is changed in Transaction Matching.
• The data source ID changes.
• The data source attribute ID changes, or is added and removed.
In this case, you need to recreate the application (including the entire target application, import format, location, mapping, and data load rule) in Data Management.
For a SWIFT MT940 file, select SWIFT MT940 Format Bank File Transactions.
6. In Prefix, specify a prefix to make the source system name unique.
Use a prefix when the source system name you want to add is based on an
existing source system name. The prefix is joined to the existing name. For
example, if you want to name a Bank file application the same name as the
existing one, you might assign your initials as the prefix.
7. Click OK.
8. To add or modify dimensions in the Bank file source system, select the Dimension
Details tab.
The dimension details for a Bank file application are shown below:
9. Select the Target Dimension Class or click to select the Target Dimension Class
for each dimension that is not defined in the application.
The dimension class is a property that is defined by the dimension type.
10. Set up the integration mapping between Bank file source system and the Account
Reconciliation target application by building an import format.
See Working with Import Formats.
11. Define the location used to associate the import format.
Note:
Category mappings are not relevant for Transaction Matching, but they are
required in Data Management.
13. Define data mapping to map the members from the source to target.
Note:
All transaction matching files require the Reconciliation Id dimension to be
mapped to the corresponding Transaction Matching Profile.
When you execute a data load rule, the Point of View requires Location, Period,
and Category to be selected; however, Transaction Matching does not use the
Period and Category when processing the transactions. Only the correct Location
is required to be selected.
Note:
BAI codes 100-399 are for bank credits (positive numbers) and 400-699 are
for bank debits (negative numbers).
Bank-specific BAI codes greater than 699 are treated by Data Management as bank credits (positive numbers) by default. If you need a specific code in this range to be treated as a bank debit (negative number), you can use SQL mapping (see Creating Mapping Scripts) to update the amount to a negative number, as in the following example.
AMOUNTX=
CASE
WHEN UD7 = '868' THEN AMOUNT*-1
ELSE AMOUNT
END
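The same sign convention can be expressed outside SQL. This sketch mirrors the note and the mapping example above, treating bank-specific code 868 as a debit just as the SQL example does; the function itself is illustrative, not part of Data Management.

```python
def signed_amount(bai_code, amount):
    """Apply the BAI sign convention described in the note: codes 100-399
    are bank credits (positive), 400-699 are bank debits (negative), and
    codes above 699 default to credits. Code 868 is flipped to a debit
    here to mirror the SQL mapping example."""
    code = int(bai_code)
    if 400 <= code <= 699 or code == 868:
        return -abs(amount)
    return abs(amount)
```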
To add a BAI Format Bank File or SWIFT MT940 Format Bank File Balances source system:
1. Save the BAI Format Bank File or SWIFT MT940 Format Bank File Balances files as
CSV format file.
2. Upload the files using the file browser when registering the target application.
The following shows a BAI Format Bank File Balances file:
The following shows a SWIFT MT940 Format Bank File Balances file:
3. From the Home page, click (Navigator icon), and then from the Integration category, select Data Management.
4. Select the Setup tab, and then under Register, select Target Application.
5. In Target Application, in the summary grid, click Add, and then select Data
Source.
6. From Source System, select Bank file.
7. From Application Name, select an application name from the list of values.
Available types of applications include:
• BAI Format Bank File Balances
• SWIFT MT940 Format Bank File Balances
For a BAI Format Bank File Balances file, the available application names are a
combination of match types and a data source name on that match type in Transaction
Matching. For example, in Transaction Matching, the match type INTERCO has two data
sources AP and AR. This results in two target application names in the available list;
INTERCO:AP and INTERCO:AR.
Note:
The Data Management connection to the BAI source file fails under the
following circumstances:
• The match type is changed in Transaction Matching.
• The data source ID changes.
• The data source attribute ID changes, or is added and removed.
In this case, you need to recreate the application (including the entire target application, import format, location, mapping, and data load rule) in Data Management.
For a SWIFT MT940 file, select SWIFT MT940 Format Bank File Balances.
8. In Prefix, specify a prefix to make the source system name unique.
Use a prefix when the source system name you want to add is based on an existing
source system name. The prefix is joined to the existing name. For example, if you want
to name a Bank file application the same name as the existing one, you might assign
your initials as the prefix.
9. Click OK.
10. To add or modify dimensions in the BAI Format Bank File Balances file source system,
select the Dimension Details tab.
The dimension details for a BAI Format Bank File Balances file application are shown
below:
The dimension details for a SWIFT MT940 Format Bank File Balances application
are shown below:
11. Select the Target Dimension Class or click to select the Target Dimension
Class for each dimension that is not defined in the application.
The dimension class is a property that is defined by the dimension type.
12. Select the Setup tab, and then under Integration Setup, select Import Format.
13. Set up the integration mapping between BAI Format Bank File Balances source
system and the Account Reconciliation target application by building an import
format.
See Working with Import Formats.
An example of the import format for a BAI Format Bank File Balances application
is shown below:
An example of the import format for a SWIFT MT940 Format Bank File Balances
application is shown below:
14. Select the Setup tab, and then under Integration Setup, select Location.
16. Select the Workflow tab, and then under Data Load, select Data Load Mapping.
17. Map the account numbers in the file to the appropriate Reconciliation names.
Note:
All transaction matching files require the Reconciliation Id dimension to
be mapped to the corresponding Transaction Matching Profile.
18. Map the Source Type dimension to the hard-coded "source system" or "source sub-ledger" target value.
20. Create a data load rule for the location and specify the period and category.
In this example, the data rule "BAIRule" is created, and the BAI Format Bank Balances file format is imported to the location "Loc_BAIFormat." The period is specified as "Feb-20" and the category is specified as "Functional."
23. Create two rules for both import format executions as shown below.
24. Execute the data load rule by selecting Application and then Period.
25. Click to go to the action menu and then click Import Data.
26. Click +, then from New Data Load Execution, select Use saved data load, and select
the data load created in the previous step.
Balances are loaded to the reconciliation defined in the Data Load Mapping for the Profile
dimension as shown below. Bank balances are typically loaded to the Subsystem, but
can also be loaded to the Source System if needed.
1. From the Home page, click the Navigator icon, and then from the Integration category, select Data Management.
2. Select the Setup tab, and then under Register, select Target Application.
3. In Target Application, in the summary grid, click Add, and then from Type, select
Local.
4. On the Select Application page, from the Type drop-down, select Transaction Matching Data Sources.
5. In Application Name, enter the target application name for the Transaction
Matching data source.
6. In Prefix, optionally specify a prefix to make the application name unique.
The prefix supports a maximum of 10 characters. The combination of the reconciliation type and transaction matching data source name is auto-generated.
For example, if the bank file import is for a Transaction Matching data source named "BOA" and the reconciliation type text id is "BAI_LOAD_RT," you might add the prefix "TM_" followed by "BAI_LOAD_RT" and then "BANK." In this case, the application name would be "TM_BAI_LOAD_RT:BANK."
In another example, if an MT940 bank file import is for a transaction data source named "SWIFT_MT940_MT" and the reconciliation type text id is "BANK," then the target application name would start with a prefix (such as DEMO_), followed by "SWIFT_MT940_MT," and then "BANK." In this case, the name is "DEMO_SWIFT_MT940_MT:BANK."
7. Click OK.
9. Select the Target Dimension Class or click to select the target dimension class for
each dimension that is not defined in the application.
The dimension class is a property that is defined by the dimension type.
10. Set up the source and target mapping between the source system and the Transaction
Matching target application by building an import format.
See Working with Import Formats.
The following shows the import format for a Bank file.
The SWIFT MT940 import format requires that you map the Reconciliation Id
dimension to the corresponding Transaction Matching reconciliations. You can
map other dimensions as needed.
See Creating Member Mappings.
14. In the Data Load Workbench, test and validate the data by executing the data load
rule to ensure that the data load rule is running properly, and your data looks
correct. Data Management transforms the data and stages it for Account
Reconciliation to use.
See Using the Data Load Workbench.
1. From the Home page, click the Navigator icon, and then from the Integration category, select Data Management.
2. Select the Setup tab, and then under Register, select Target Application.
3. Add a new Transaction Matching target application or select an existing one.
For information on adding a Transaction Matching target application, see Adding a
Transaction Matching Target Application.
4. Select the Dimension Detail tab.
When you select a Transaction Matching target application, the dimension details of the
application are populated automatically on the Dimension Detail tab.
Include only those dimensions that you want to aggregate when mapping dimensions.
For example, if you want to roll up only the merchant number, bank reference, credit card
type, or transaction date, include only these corresponding dimensions in your mappings.
5. Select the Target Dimension Class or click to select the target dimension class for
each dimension that is not defined in the application.
The dimension class is a property that is defined by the dimension type.
6. Click Save.
7. Select the Application Options tab.
You can also enable the aggregation option by selecting Data Load Rule, then Target
Options, then Aggregation and then Y (for yes).
The Aggregate option chosen on the Data Load Rule overrides the option chosen
in the Application Options.
9. Click Save.
10. Set up the source and target mapping between the source system and the
Transaction Matching target application by building an import format.
See Working with Import Formats.
11. Define the location used to associate the import format.
15. To view Transaction Matching with the aggregated imported data, from the Account Reconciliation home page, click Matching.
16. Click the Account Id to which the source accounts were mapped.
Note:
As a best practice when loading transactions through Data Management, do not replicate your General Ledger or Subledgers in Account Reconciliation. Loading activity from your ERP is not a best practice for period-end reconciliations. If you need to load more than 100 transactions, then as an implementer you need to ask more questions to better understand the customer's requirements. For a reviewer, a large number of transactions for a period-end reconciliation would be difficult to review. Use cases with higher volumes of transactions are candidates for Transaction Matching, not Reconciliation Compliance.
1. From the Home page, click the Navigator icon, and then from the Integration category, select Data Management.
2. Select the Setup tab, and then under Register, select Target Application.
3. In Target Application, in the summary grid, click Add, and then select Local.
4. On the Select Application page, from the Type drop-down, select Reconciliation Compliance Transactions.
Dimension names must match exactly with attribute names in Account Reconciliation.
If the dimension is for a standard attribute, its name should be exactly as specified
here and should not be changed.
By default "Profile" is mapped to the "Account" (Reconciliation Account ID) target
dimension class and "Period" is mapped to the "Period" target dimension class.
The following dimensions are assigned to the Attribute target dimension class and are mapped to the ATTR1 to ATTR4 columns respectively. If mapping rules are needed for these dimensions, change them to Lookup dimension types and map them to UD (user-defined) columns. Attribute dimensions cannot have mapping rules.
For more information about Lookup dimensions, see Adding Lookup Dimensions.
The following are standard dimensions and the names should not be changed.
Dimensions for unused currency buckets can be deleted.
Other standard dimensions are shown below. These can either be Lookup or attribute
dimensions. Because Reconciliation Compliance Transactions allow the same custom
attributes to be assigned to the transaction itself and its action plan, the system differentiates
between custom attributes for the transaction and custom attributes for the action plan. In this
case, the system prefixes Action Plan at the beginning of dimension names for action plan
attributes.
4. Select the Target Dimension Class or click , select the Target Dimension Class for
each dimension that you need to change, and then specify the target dimension class
from the drop-down.
5. Click Save.
The Reconciliation Compliance Transactions load uses the Period Key and Prior
Period Key defined in Data Management.
8. Click Save.
8. Click Execute.
9. On the Execute Rule page, complete the following options:
a. Select Import from Source.
Data Management imports the data from the source system, performs the
necessary transformations, and exports the data to the Data Management
staging table.
b. Select Export to Target.
Select this option after you have reviewed the data in the staging table and
you want to export it to the target application.
c. From Start Period and End Period, select the period defined for
Reconciliation Compliance Transactions.
d. Click Run.
11. In Prefix, optionally specify a prefix to make the application name unique.
When you add an Account Reconciliation Journal Adjustment data source, the dimensions in the application are populated automatically on the Dimension Detail tab.
These dimensions correspond to the Transaction Matching journal entry column
names in Account Reconciliation as shown below.
14. Map all dimension names in the Dimension Names column with the value
Generic in the Target Dimension Class column and click Save.
15. Create the target application by clicking Add and then Local.
16. On the Select Application screen, from Type, select Custom Application.
18. In the Application Details section, in the Name field, specify the name of the custom
application.
19. Select the Dimension Details tab.
23. On the Setup tab, under Integration Setup, select Import Format.
24. In Details and then in Name, specify the name of the import format.
25. In Source and Target, select the source and target for the journal adjustments.
26. In the Mapping section, map the Account Reconciliation Journal Adjustment source columns and the custom target application columns.
For more information, see Working with Import Formats.
27. On the Setup tab, under Integration Setup, select Location.
30. Define the data load mapping to map the members from the source to target.
A location is the level at which a data load is executed in Data Management. Any import
format associated with the location is populated automatically in the Import Format field.
If multiple import formats have been associated with the location, you can browse
for them.
33. Select the Source Filters tab, and complete any parameters based on the
transaction matching type.
Available parameters:
• Type—Specify the type of reconciliation.
Available types:
– Transactions
– Adjustments
• Match Type—Specify the match type ID such as "Clearing."
Match Types determine how the transaction matching process works for the
accounts using that match type. They determine the structure of the data to be
matched, as well as the rules used for matching. Additionally, match types are
used to export adjustments back to an ERP system as journal entries in a text
file.
• Data Source—Specify the data source when the transaction matching
transaction type is "Transactions."
Leave this field blank when the transaction matching transaction type is
"Adjustments."
Names for the data sources that appear in Data Management are actually
sourced from the Transaction Matching data sources. The convention used in
the drop-down is Match Type Name: Data Source Name.
For example, application choices might include:
– InterCo3:AR
– InterCo3:AP
– Bank BAI:BAI_Bank_File
– Bank BAI:GL
– INTERCO2:AR
– INTERCO2:AP
– INTERCO:AR
– INTERCO:AP
– CLEARING:CLEARING
• Filter—If you choose Transactions as the Type, specify the filter name for transactions.
The filter is defined in the data source configuration in Account Reconciliation as shown below:
If you choose Adjustments as the Type, specify the filter value in JSON format.
You can select specific transaction types and/or the accounting date when exporting the journal for Adjustments.
For example, you can select all transaction types except tax transaction types until month end.
To specify the filter for Adjustments, use the Filter field to select the following:
• (Adjustment) Type—Specify the adjustment type available for the match type
selected in the previous step. You can specify one or more values. If you don't select
a value, the default used is All.
• Adjustment Date—Specify the operand and date values (using the Date Picker to
select the dates). The operands available for filtering are: EQUALS, BEFORE,
BETWEEN, and AFTER.
The date format must be YYYY-MM-DD. If you use the EQUALS, BEFORE, or AFTER operands, use the JSON format accountingDate and then specify the accounting date. If you select the BETWEEN operand, use the JSON format:
– fromAccountingDate for the "from" Accounting Date
– toAccountingDate for the "to" Accounting Date
Here are some sample JSON formats:
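The sample formats referenced above appeared as screenshots in the original. As an illustration only, filter strings using the key names from the text might be assembled as follows; the exact JSON schema accepted by the Filter field is an assumption here, not the documented format:

```python
import json

# Illustrative only: builds adjustment date filter strings from the key
# names mentioned above (accountingDate, fromAccountingDate,
# toAccountingDate). The schema is an assumption, not Oracle's spec.

def adjustment_date_filter(operand, *dates):
    """Return a JSON filter string for EQUALS/BEFORE/AFTER or BETWEEN."""
    if operand == "BETWEEN":
        from_date, to_date = dates
        return json.dumps({"fromAccountingDate": from_date,
                           "toAccountingDate": to_date})
    (date,) = dates  # EQUALS, BEFORE, AFTER take one date (YYYY-MM-DD)
    return json.dumps({"accountingDate": date})

print(adjustment_date_filter("BETWEEN", "2023-01-01", "2023-01-31"))
```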
34. In Data Load Workbench, test and validate the data by executing the data load rule to
ensure that the data load rule is running properly, and your data looks correct. Data
Management transforms the data and stages it for Account Reconciliation to use.
See Using the Data Load Workbench.
For information on running the data load rule using EPM Automate, see the rundatarule topic in Working with EPM Automate for Oracle Enterprise Performance Management Cloud.
Integrating EPM Planning Projects and Oracle Fusion Cloud Project Management (Project Management)
With this integration, the same Indirect and Capital projects are visible in both EPM
Planning Projects and Project Management depending on the cadence of the
synchronization. The capabilities include:
• Transfer projects and budgets created in EPM Planning Projects to Project
Management. The strategic budget is created in Project Management as a
baseline budget at the resource class level.
• Use the budget approval validation to validate the detailed budgets created by
project managers vs. the strategic budgets created in EPM Planning Projects
(Optional).
• Transfer actual cost amounts from Project Management to EPM Planning Projects
at the resource class level.
• Transfer re-planned budgets from EPM Planning Projects to Project Management
at the resource class level.
You use Data Management and Data Integration to drive the integration of data
between EPM Planning Projects and Project Management. Data Management and
Data Integration provide an out-of-the-box solution that enables EPM Planning Projects
customers to apply predefined mappings from the Project Management data model to
target dimensions. You can also customize and extend these integrations, for example,
by applying other mappings as needed to meet your business requirements.
For more information, see Integrating EPM Planning Projects and Project Management.
4
Integration Tasks
Related Topics
• Working with Import Formats
• Defining Locations
• Defining Period Mappings
• Defining Category Mappings
• Loading Data
• Data Load, Synchronization and Write Back
• Logic Accounts
• Check Rules
Import formats are created for a single accounting entity. However, if you are importing data
from multiple accounting entities that have the same Chart of Accounts, define one import
format using a representative accounting entity, and then use it for importing data for
all accounting entities with the same Chart of Accounts.
To define import formats for file-based mappings, see Defining Import Formats for File-
Based Mappings.
Querying by Example
You can filter the import formats in the Import Format summary section using the
Query by Example feature. To filter by Import Format Name, ensure that the filter row
is displayed above the column headers.
To query by example:
1. On the Setup tab, under Integration Setup, select Import Format.
2. In Import Format, from the Import Format Mapping grid, select the file-based source
column.
3. In Expression, specify the import expression.
4. Optional: You can also specify the expression type and value on the Add Expression
field.
a. Click .
b. In Add Expression, under Expression Type, select the expression type.
The number and types of expressions available depend on the field that is being
modified (for example, Account or Account Description).
c. In Expression Value, enter the value to accompany the expression and click OK.
5. In Import Format Mapping, click OK.
For example, if positive numbers are followed by DR (1,000.00DR), and negative numbers
are followed by CR (1,000.00CR), the expression is Sign=DR,CR.
Numbers within <> are also treated as negative. For example, both (100.00) and <100.00> are treated as negative numbers.
If positive numbers are unsigned (1,000.00), and negative numbers are followed by CR
(1,000.00CR), the expression is Sign=,CR.
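These conventions can be sketched programmatically. The following Python function is an illustrative parser (not part of Data Management) that applies a Sign=<positive suffix>,<negative suffix> expression to amount strings, treating (...) and <...> as negative:

```python
# Illustrative parser for the Sign import expressions described above.
# Sign=DR,CR treats a trailing DR as positive and CR as negative;
# Sign=,CR leaves unsigned amounts positive; (...) and <...> are negative.

def parse_amount(text, sign="DR,CR"):
    pos_suffix, neg_suffix = (s.strip() for s in sign.split(","))
    text = text.strip().replace(",", "")  # drop thousands separators
    negative = False
    if (text.startswith("(") and text.endswith(")")) or \
       (text.startswith("<") and text.endswith(">")):
        negative, text = True, text[1:-1]
    elif neg_suffix and text.endswith(neg_suffix):
        negative, text = True, text[: -len(neg_suffix)]
    elif pos_suffix and text.endswith(pos_suffix):
        text = text[: -len(pos_suffix)]
    value = float(text)
    return -value if negative else value
```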
When the file is imported, credit amounts are assigned negative signs (and thus are
interpreted as positive), and debit amounts are unchanged (and thus are interpreted
as negative).
Processing Order
For all fields except the Amount field, Data Management processes stacked expressions in the following order: Fill or FillL.
For the Amount field, Data Management processes stacked expressions in the following
order:
1. DRCRSplit
2. Fill=EuroToUS
3. Sign
4. Scale
5. NZP
Note:
If you integrate a Financial Consolidation and Close or Tax Reporting source with an explicit period mapping type, the system stores the mapping period (SRCPERIOD) in the ATTR2 column and the mapping year (SRCYEAR) in the ATTR3 column. For this reason, when importing data from Financial Consolidation and Close, attribute columns ATTR2 and ATTR3 should not be used for any other dimension mappings.
Similarly, when you map a Movement source attribute to any target dimension, the system automatically creates another map for mapping the Movement to the ATTR1 column.
Note:
You may encounter issues with loading data if the currency is not specified correctly.
To define an import format for numeric data files with a fixed length:
Note:
For information about defining import formats for fixed length all data type data files,
see Setting the Import Format Data Types.
Note:
For information about defining import formats for delimited all data type data files,
see Setting the Import Format Data Types.
Note:
The Import Format Builder does not support tab delimited files.
Note:
The All Data Type with Security loads only to the currency specified in the import.
Note:
The All Data Type load method is not supported for Profitability and Cost
Management.
Note:
To load numeric data, use the Numeric Data Only load method.
1. Select the Setup tab, and then under Register, select Target Application.
2. In Target Application, in the Target Application summary grid, click Add, and then
select either Local or Cloud.
Available options are Cloud (for a Cloud deployment) or Local (for an on-premise
deployment).
3. In Target Application under Application Options, from the Load Method drop-
down, select all data types with security.
Loading Incremental Data using the LINEITEM Flag to an EPM Cloud Application
You can include line item detail using a LINEITEM flag in the data load file to perform
incremental data loads for a child of the data load dimension based on unique driver
dimension identifiers to an Oracle Enterprise Performance Management Cloud
application. This load method specifies that data should be overwritten if a row with the
specified unique identifiers already exists on the form. If the row does not exist, data is
entered as long as enough child members exist under the data load dimension parent
member.
For example, you can load employee earnings detail from the following sample source
data file to a target EPM Cloud application.
Emp,Job,Pay Type,Amount
"Stark,Rob",Accountant,Bonus_Pay,20000
"Molinari,Sara",Sales Manager,Bonus_Pay,22000
"Matthew,Peter",Sales Associate,Bonus_Pay,5000
When using the LINEITEM syntax, the data file may contain records having identical dimensions except for driver member values.
In the following data file, records have the same dimensions but differ on the value of the acct_date column (a driver member). This requires you to identify the driver member(s) that make the data record unique (that is, the acct_date column in this example).
Entity,Employee,Version,asl_EmployeeType,acct_date,acct_text,SSTax Rate1
<LINEITEM("ParentMember")>,No Employee,Baseline,Regular,1-1-2001,Text1,0.4
<LINEITEM("ParentMember")>,No Employee,Baseline,Regular,1-1-2002,Text2,0.4
<LINEITEM("ParentMember")>,No Employee,Baseline,Regular,1-1-2003,Text3,0.5
To support the above use case, create a LOOKUP dimension and map the driver member column to it in the Import Format option. The name of the dimension must start with LineItemKey. For example, create a LOOKUP dimension named LineItemKey and assign any Data Column Name (such as UD8). In the Import Format option, map the LineItemKey dimension to the fifth column (acct_date) in the data file and use the LIKE (* to *) data mapping. You may also use other types of data mappings to populate the lookup dimension. If needed, create more LOOKUP dimensions to uniquely identify data records. The rest of the setup is the same.
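The overwrite-or-append behavior described above can be modeled in a short sketch. This illustrative Python function (member and column names are hypothetical; it is not Data Management's implementation) overwrites a row whose key columns match an existing line item and otherwise claims the next unused child member under the data load dimension parent:

```python
# Hedged sketch of the LINEITEM incremental-load rule described above.
# Rows whose key columns (the driver/lookup members, e.g. acct_date)
# match an existing row are overwritten; new keys claim the next unused
# child member. All names here are illustrative.

def apply_lineitem_load(existing, incoming, key_cols, children):
    """existing: {child_member: row dict}; incoming: list of row dicts."""
    def key(row):
        return tuple(row[c] for c in key_cols)

    by_key = {key(row): member for member, row in existing.items()}
    result = dict(existing)
    free = [m for m in children if m not in existing]
    for row in incoming:
        k = key(row)
        if k in by_key:                      # overwrite matching line item
            result[by_key[k]] = row
        elif free:                           # claim next unused child member
            member = free.pop(0)
            result[member] = row
            by_key[k] = member
        else:
            raise RuntimeError("not enough child members under the parent")
    return result
```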
To use this feature, you need to perform steps both in Planning and Data Management.
1. Launch Planning.
2. From the Data Load Settings screen, select the Data Load Dimension and Driver
Dimension.
In Planning, Earning1 and Earning2 are members of the Account dimension. The various Earnings Types are loaded to the No Property member of the Property dimension, and the Earning value is loaded to the OWP_Value member of the Property dimension.
For more information about the Data Load Settings screen, see the Oracle Hyperion Planning Administrator's Guide.
3. Launch Data Management, then select Setup, and then select Import Format.
4. From the Import Format Mapping grid, select the data source column.
5. In Expression, add an import expression for the data driver.
For example, add the import format expression: Driver=Property;member="No
Property","OWP_value";Column=3,4.
For more information about adding drivers in Data Management, see Adding an Import Expression for a Data Driver and Assigning Driver Dimension Members.
6. From Workflow, select Data Load Mapping.
In Data Load Mapping, you identify how source dimensionality translates to the
target dimensionality. As shown below for a "Like" mapping, the Earning source
value (represented by the asterisk) is loaded to OWP_Total Earnings of the
Account dimension.
10. In Data Dimension for Auto-Increment Line Item, select the data dimension that
matches the data dimension you specified in Planning.
In this example, the data dimension is Account.
11. In Driver Dimension for Auto-Increment Line Item, select the driver dimension that
matches the driver dimension you specified in Planning.
In this example, the driver dimension is Property.
• For the multi-column type, you can use a header, multi-row header, or no header
specified in the import format. These are the different formats:
Note:
In the import format, you must have a column definition for the driver dimension defined in the data field. If your driver is "Account," then your import format must include a source column and field, or a start and end position, for the Account dimension. This must be a valid field in the file, or a valid start and end position in the file. It is not referenced by the process, but it must be valid for the process to execute.
– For a file with a header record, use the format Driver=<Dimension Name>;
Header=<Row Number>; Column=<Column Numbers>.
For example, when the import format definition
Driver=Account;HeaderRow=1;Column=2,4 is applied to the following sample data
file:
Entity,ACCT1,ACCT2,ACCT3
Entity01,100,200,300
This tells the system that row 1 is the header and that data starts in row 2. In each data row, the entity is the first value, and the next three columns are the values for ACCT1, ACCT2, and ACCT3.
– For a file with multiple row headers (driver members don’t line up with the data
column), you can use a modified header expression. For example, when you export
data from Essbase as in the following data file, the data column header is a new row
and does not line up data.
"Period","Consolidation","Data Source","Currency","Intercompany","Entity","Movement","Multi-GAAP","Product","Scenario","Years","View","Account"
"FCCS_Sales","FCCS_Cost of Sales"
"Jan","FCCS_Entity Input","FCCS_Data Input","Entity Currency","FCCS_No Intercompany","01","FCCS_No Movement","FCCS_Local
GAAP","P_110","Actual","FY15","FCCS_Periodic",3108763.22,2405325.62
"Jan","FCCS_Entity Input","FCCS_Data Input","Parent Currency","FCCS_No Intercompany","01","FCCS_No Movement","FCCS_Local
GAAP","P_110","Actual","FY15","FCCS_Periodic",3108763.22,2405325.62
With a multi-row header, you identify to the system the header row that contains the driver information. When the header row is specified as Header=2,1, the header starts at row 2, and the driver members start at column 1.
In another example, say your second header is A,B,C,D and columns are 10 to 13 for
these values. If you set column expression to 10|12,13, then the B member and its
values (at column 11) are skipped.
– To load multiple columns without a header record in the data file, use the import format definition Driver = <Dimension Name>; Member = <List of Members>; Column=<Column Numbers>. Use this method when you want to skip a source column in the source record.
For example, when the import format definition Driver=Account;member=ACCT1,
ACCT2, ACCT3;Column=2,4; is applied to the following data file:
Entity01,100,200,300
you tell the system to include entity as the first value and then, for the next three columns, to use the Account driver dimension member values ACCT1, ACCT2, and ACCT3.
• For data source application types, you assign the driver dimension, but the system
assigns row 1 as the header. You can load multiple columns by selecting the
columns from the Add Import Format Mapping Expression screen.
a. Click .
b. From the Expression Type drop-down, select Driver.
c. In Add Import Format Mapping Expression, when entering a driver, enter the values for the expression and click OK.
In Header row, select the header row of the file for the expression.
In Column(s), specify the data columns in the expression. To use a range of DATA
columns, specify columns using a comma (,). To use non-contiguous DATA columns,
specify columns using the pipe (|) delimiter.
d. Click OK.
In the following example, the "Project Element" is the driver member of the first
header row, and includes contiguous rows "2,3", and non-contiguous rows "5,7".
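As an illustration of the driver formats above, this Python sketch (not part of Data Management) reads a comma-delimited file using either a header row (as in Driver=Account;HeaderRow=1;Column=2,4) or an inline member list (as in Member=ACCT1,ACCT2,ACCT3); column numbers are 1-based, as in the import format definitions:

```python
# Sketch of how the driver expressions above map file columns to members.
# With header_row set, member names come from that row of the file;
# otherwise a members list supplies them inline. Columns are 1-based.

def read_multicolumn(lines, members=None, header_row=None, columns=(2, 4)):
    start, end = columns
    rows = [line.split(",") for line in lines]
    if header_row is not None:
        members = rows[header_row - 1][start - 1:end]  # driver member names
        rows = rows[header_row:]                       # data starts after header
    out = []
    for row in rows:
        entity = row[0]                                # entity is the first value
        for member, amount in zip(members, row[start - 1:end]):
            out.append((entity, member, float(amount)))
    return out
```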
4. In the Value field, enter the name of the driver dimension member to use in the
header record or member expression.
5. Optional: To search on driver dimension members, click the Search button and
navigate to the driver dimension on the Member Selection screen.
6. Click Save.
• text data file with multiple columns of numeric data to a period or any other
dimension as a column header by specifying the:
– column header in the data file
– column header member list in the import format
– column header member in the data rule
• Excel data file with multiple columns of numeric data to a period as a column
header. The Excel file may or may not contain a header.
To load multi-column numeric data:
1. On the Setup tab, under Integration Setup, select Import Format.
2. In the Import Format summary task bar, select Add.
In the upper grid of the Import Formats screen, a row is added.
3. In Name, enter a user-defined identifier for the import format.
You cannot modify the value in this field after a mapping has been created for this
import format.
4. In Description, enter a description of the import format.
5. In Source, select File for the source.
6. From File Type drop-down, select Multi Column - Numeric Data as the format of
the file.
7. From the File Delimiter drop-down, select a type of delimiter.
Available delimiter symbols:
• comma (,)
• exclamation (!)
• semicolon (;)
• colon (:)
• pipe (|)
• tab
• tilde (~)
8. In Target, select EPM and select any EPM application as a target.
9. Optional: In Drill URL, enter the URL used for the drill-through.
10. In the Mapping section, select the Amount dimensions and click .
11. From the Expression Type drop-down, select Column=start,end.
You can import a contiguous set of columns or a non-contiguous set of columns. To use a
range of Amount (data) columns, specify columns using a comma (,). To use non-
contiguous amount columns, specify columns using the pipe (|) delimiter.
You specify contiguous columns by using starting and ending columns. For example,
5,10 indicates columns 5 through 10.
You specify non-contiguous columns by using column1 | column2 | column3. For
example, 5|7|10 indicates import columns 5, 7, and 10.
13. Optional: Specify any drivers and header rows of the file for the expression.
To load a text data file with multiple columns of numeric data to a period:
1. Complete steps 1-12 of the To load multi-column numeric data procedure.
2. From the Expression Type drop-down, select Driver.
3. On the Add Import Format Mapping Expression, in Dimension, leave the default
driver dimension Period.
4. In Period(s), select the period driver dimension member to load and click OK.
Specify the period using quotes. For example, you might enter: "Dec-9".
If you do not specify a period driver member dimension on the Add Import Format
Mapping Expression, you can specify period members in the data load rule. See steps
5-11.
5. On the Workflow tab, under Data Load, select Data Load Rule.
6. On the Data Load Rule screen, select the POV to use for the data load rule.
7. Add or select the data load rule to use for the multi-column numeric data load.
8. In Import Format, select the import format set up for the multi-column numeric
load.
9. Optional: From the Source Options tab, specify any source options.
10. Select the Column Headers tab, and specify the start date and end date of the
numeric columns.
You are prompted to add the start and end dates on the Column Headers tab
when:
• a text data file has no header specified in the header record of the data file, in the import format, or in the data rule.
• you are using an Excel file (in all cases). If header information is specified in the Excel file, only periods that fall within the start and end period range are processed.
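The column expressions used for multi-column loads follow a small grammar: a comma marks a contiguous start,end range, and a pipe separates non-contiguous segments. An illustrative expansion in Python:

```python
# Expands the column expressions described above into 1-based column
# numbers: "5,10" is columns 5 through 10; "5|7|10" lists individual
# columns; segments after a pipe may themselves be ranges ("10|12,13").

def expand_columns(expr):
    cols = []
    for part in expr.split("|"):
        if "," in part:
            start, end = (int(c) for c in part.split(","))
            cols.extend(range(start, end + 1))   # contiguous range
        else:
            cols.append(int(part))               # single column
    return cols
```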
processing times due to archiving of data for audit purposes. Workflow mode options provide
scalable solutions when processing large volumes of data, or when an audit is not required,
and performance is a key requirement.
The three workflow mode options are:
• Full
• Full (no archive)
• Simple
The Full option is the default flow for loading data. Data is loaded in the standard way
between the staging tables, data can be viewed in Workbench, and drill down is supported.
The Full (no archive) option loads data in the same manner as the Full mode, but data is
deleted from the staging tables at the end of the data load process. Data can be viewed in
the Workbench only after the import step. No drill down is available with the Full (no archive)
mode. This method is useful when you want to review and validate the data during the load
process but auditing or drill down is not a requirement. This method does not provide a
performance improvement but limits space usage because data is not persisted for future
reference.
The Simple option limits data movement between the staging tables. No drill down is
available, and the data cannot be viewed in the Workbench. This method provides
performance improvement and is useful when you do not require auditing or drill down.
Note:
You can use import expression mapping or traditional dimension mapping for any of
the Workflow modes.
Note:
If you use the Simple workflow mode to load data (see Using Workflow Modes) and
you run a check rule with target intersections, then include a check entity group
(see Creating Check Entity Groups); otherwise, the check rule fails. In addition, in
any workflow mode other than Full, check reports are not available after the
export step has completed.
3. After defining the application details in Application Detail, select the Application
Options tab.
4. From Workflow, select the mode option and click Save.
Defining Locations
A location is the level at which a data load is executed in Data Management. You
define locations to specify where to load the data. Additionally, Locations enable you to
use the same import format for more than one target application where the
dimensionality of the target applications is the same.
The Location feature also enables you to specify free form text or a value using the
integration option feature. Text or values entered for a location can be used with your
Data Management scripts.
Note:
You can create duplicate locations with the same source system and
application combination.
Note:
For Financial Consolidation and Close and Tax Reporting customers: To load
data to actual currency rather than entity currency when the currency is fixed,
set the currency in the Functional Currency field in the Location option. You can
also add a Currency row in the import format and map it. See Defining the
Import Format.
Financial Consolidation and Close customers can also specify Parent Input, Contribution
Input, and Translated Currency Input in this field to create and post journals in
currencies other than the entity currency.
Note:
For Tax Reporting applications, a rates cube does not have a
consolidation dimension. For this reason, leave this field blank so that
you can load exchange rates for Tax Reporting applications.
9. Optional: In Logic Account Group, specify the logic account group to assign to
the location.
A logic group contains one or more logic accounts that are generated after a
source file is loaded. Logic accounts are calculated accounts that are derived from
the source data.
The list of values for a logic group is automatically filtered based on the Target
Application under which it was created.
10. Optional: In Check Entity Group, specify the check entity group to assign to the
location.
When a check entities group is assigned to the location, the check report runs for all
entities that are defined in the group. If no check entities group is assigned to the
location, the check report runs for each entity that was loaded to the target system. Data
Management check reports retrieve values directly from the target system, Data
Management source data, or Data Management converted data.
The list of values for a check entity group is automatically filtered based on the Target
Application under which it was created.
11. Optional: In Check Rule Group, specify the check rule group to assign to the location.
System administrators use check rules to enforce data integrity. A set of check rules is
created within a check rule group, and the check rule group is assigned to a location.
Then, after data is loaded to the target system, a check report is generated.
The list of values for a check rule group is automatically filtered based on the Target
Application under which it was created.
12. Click Save.
• To edit an existing location, select the location to modify, and then make changes as
necessary. Then, click Save.
• To delete a location, click Delete.
When a location is deleted, the location is removed from all other Data Management
screens, such as Data Load.
Tip:
To filter by the location name, ensure that the filter row is displayed above the
column headers. (Click to toggle the filter row.) Then, enter the text to filter.
Defining Period Mappings
Before you can define data rules, define the period mappings. Period mappings define
the mapping between Enterprise Resource Planning (ERP) calendars and the EPM
application year or periods. You can define period mappings in three ways:
• Global Mapping—You define a global mapping in cases where you do not have
many target applications getting data from multiple source systems with different
types of source calendars. Use a global mapping to ensure that various periods
are accommodated in the individual mapping. As a first step, define a global
mapping.
• Application Mapping—If you have multiple target applications, getting data from
various source systems with complex period types, you can create application
mappings in addition to global mappings. When you define an application
mapping, you can modify the Target Period Month as necessary.
• Source Mapping—Specifies source period mapping for adapter-based
integrations.
Note:
Source mappings are also used to set up Oracle General Ledger
adjustment periods. For more information, see Processing Oracle
General Ledger Adjustment Periods.
Note:
You should define global mapping at the most granular level. For example, if
you have a monthly calendar and a weekly calendar, define your global
mapping at the lowest level of granularity. In this case, the period keys are at
the week level and you map weeks to months. You can create application
mappings for the higher-level periods.
Period Key | Prior Period Key | Period Name | Target Period Month | Target Period Quarter | Target Period Year
Jan 1 2010 | Dec 1 2009 | January 1, 2010 | Jan | Q1 | FY10
Feb 1 2010 | Jan 1 2010 | February 1, 2010 | Feb | Q1 | FY10
Mar 1 2010 | Feb 1 2010 | March 1, 2010 | Mar | Q1 | FY10
April 1 2010 | March 1 2010 | April 1, 2010 | Apr | Q2 | FY10
May 1 2010 | April 1 2010 | May 1, 2010 | May | Q2 | FY10
Period Key | Prior Period Key | Period Name | Target Period Month | Target Period Quarter | Target Period Year
Jan 26 2009 | Jan 19 2009 | January 26, 2010 | Jan | Q1 | FY09
Feb 2 2009 | Jan 26 2009 | February 2, 2010 | Feb | Q1 | FY09
Feb 9 2009 | Feb 2 2009 | February 9, 2010 | Feb | Q1 | FY09
Feb 16 2009 | Feb 9 2009 | February 16, 2010 | Feb | Q1 | FY09
Table 4-9 Sample Application Mapping—Target Application #1 with a Monthly Calendar Source

Period Key | Target Period Month | Target Period Quarter | Target Period Year
Jan 1 2009 | Jan | Q1 | FY09
Feb 1 2009 | Feb | Q1 | FY09
Mar 1 2009 | Mar | Q1 | FY09
Table 4-10 Sample Application Mapping—Target Application #2 with a Weekly Calendar Source

Period Key | Target Period Month | Target Period Quarter | Target Period Year
Jan 26 2009 | Jan | Q1 | FY09
Feb 2 2009 | Feb | Q1 | FY09
Feb 9 2009 | Feb | Q1 | FY09
Feb 16 2009 | Feb | Q1 | FY09
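Resolving a target period from these mappings amounts to a keyed lookup. The sketch below models a few rows from Tables 4-9 and 4-10 as a Python dictionary; the application names and the data structure are illustrative assumptions, not Data Management internals:

```python
# Application mappings keyed by (application, period key); values are the
# target period columns from the sample tables above.
app_mappings = {
    ("Target1", "Jan 1 2009"): {"month": "Jan", "quarter": "Q1", "year": "FY09"},
    ("Target1", "Feb 1 2009"): {"month": "Feb", "quarter": "Q1", "year": "FY09"},
    ("Target2", "Jan 26 2009"): {"month": "Jan", "quarter": "Q1", "year": "FY09"},
    ("Target2", "Feb 2 2009"): {"month": "Feb", "quarter": "Q1", "year": "FY09"},
}

def target_period(app: str, period_key: str) -> dict:
    """Resolve the target period columns for a source period key in an application."""
    return app_mappings[(app, period_key)]

print(target_period("Target2", "Feb 2 2009"))
# {'month': 'Feb', 'quarter': 'Q1', 'year': 'FY09'}
```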
Note:
To avoid double counting on Income Statement accounts, be sure not to
define a mapping where the adjustment period of one year goes into the
period of the next fiscal year.
Global Mappings
You can define one global mapping that maps the various source periods to the target periods.
To define a global mapping:
1. On the Setup tab, under Integration Setup, select Period Mapping.
2. Select the Global Mapping tab.
3. Click Add.
4. Select the Period Key.
5. Select the Prior Period Key.
6. Enter the following:
a. Period Name; for example, August 2005.
b. Target Period Month; for example, August.
c. Target Period Quarter
d. Target Period Year
e. Target Period Day
f. Year Target
7. Click Save.
Application Mappings
You can define application mappings in cases where you want to define a special
period mapping for a specific target application. The mappings that you create here
apply to an individual target application.
To create period mappings for an application:
1. On the Setup tab, under Integration Setup, select Period Mapping.
2. Select the Application Mapping tab.
3. In Target Application, select the target application.
4. Click Add.
5. Select the Period Key.
6. Enter the following:
a. Target Period Month
b. Target Period Quarter
Source Mappings
Source mappings include explicit and adjustment period mappings. You can create explicit
period mappings to ensure that the Data Management periods map correctly to the source
system calendar periods. An adjustment period mapping is used only when you select the
Include Adjustment Periods option when creating the data load rule.
The Source Mapping tab consists of two areas:
• Master—Selects the source system and mapping type.
• Grid—Defines the period mapping. The mapping can be defined only for periods defined
on the Global Mapping. New Data Management periods cannot be created on this tab.
Note:
In Data Rules, you can choose between Default period mapping and Explicit period
mapping. If you choose Default period mapping, then source periods are mapped based
on the period key and previous period.
Note:
Period names cannot include spaces if used in a batch script.
7. Enter the source system Period Key, and then click OK.
8. Enter the source system Calendar, and then click OK.
9. Enter the source system GL Period, and then click OK.
The GL Period Number is prefilled based on the Period Name.
10. Enter the source system GL Name, and then click OK.
5. Click to select the source system Period Key, and then click OK.
6. Click to select the source system Calendar, and then click OK.
7. Click to select the source system Adjustment Period, and then click OK.
8. Optional: Enter a description for the mapping.
9. Click Save.
1. Select Source Mapping.
2. In Source System, select the source system.
3. Click Add.
4. In Mapping Type, select Budget.
5. In Period Name, specify the period name.
Note:
Period names cannot include spaces if used in a batch script.
7. Enter the source system GL Period, and then click OK. You can also click to search for
and select the General Ledger period name.
The GL Period Number is prefilled automatically based on the Period Name.
8. Optional: Enter a description for the mapping.
9. Click Save.
Tip:
To delete a mapping, select the mapping, and then click Delete.
Defining Category Mappings
Global Mappings
You can define one global mapping that maps source categories to the target Scenario
dimension.
The global category mapping lets you define mappings that apply across multiple
applications. For example, a source category of Actual maps to a target of Actual in most
cases, but a particular target application may require Actual to map to Current. In that case,
an application mapping provides the ability to override the global mapping on an application
basis.
Note:
Avoid using special characters in names or spaces if you plan to use batch scripts.
Some characters may cause issues when run from a command line.
Application Mappings
Unlike global mappings, application mappings can be defined for a target application.
To define application category mappings:
1. On the Setup tab, under Integration Setup, select Category Mapping.
2. In Category Mappings, select the Application Mapping tab.
3. From Target Application, select the target application.
4. Click Add.
A blank entry row is displayed.
5. Select the category.
Loading Data
Data Management is a solution that allows business analysts to develop standardized
financial data management processes and validate data from any source system—all
while reducing costs and complexity. Data Management puts the finance user in total
control of the integration process to define source data, create mapping rules to
translate data into the required target format, and to execute and manage the periodic
data loading process.
• Like—The string in the source value is matched and replaced with the target value.
The following table is an example of a member mapping, where three segment members,
Cash-101, Cash-102, and Cash-103 map to one EPM member Cash.
You can use special characters for the source values. See Using Special Characters in the
Source Value Expression for Like Mappings and Using Special Characters in the Target Value
Expression.
Note:
The target value for a multi-dimensional mapping must be an explicit member name.
Wildcards or special characters are not supported.
• Like—The string in the source value is matched and replaced with the target
value. For example, the source value "Department" is replaced with the target
value "Cost CenterA". See Creating Mappings Using the Like Method.
When the source values are processed for transformation, multiple mappings may
apply to a specific source value. The order of precedence is Explicit, Between, In,
Multi-Dimension, and Like. Within the Between and Like types, mappings can overlap,
in which case the rule name determines precedence: rules are processed in
alphabetical order of the rule name within a mapping type. Numeric prefixes can also
be used to control the order. For example, if you number rules by tens or hundreds,
you can insert new rules between existing ones: if rules are named 10, 20, and 30,
you can add a rule named 25 without renaming the others.
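The precedence just described can be modeled as a two-part sort key: the mapping type's rank, then the rule name. The rule records below are made-up examples:

```python
# Order in which mapping types are evaluated, as described above.
TYPE_PRECEDENCE = {"Explicit": 0, "Between": 1, "In": 2, "Multi-Dimension": 3, "Like": 4}

rules = [
    {"type": "Like", "name": "30_dept_like"},
    {"type": "Explicit", "name": "cash_101"},
    {"type": "Like", "name": "10_acct_like"},
    {"type": "Between", "name": "20_range"},
]

# Sort by type precedence first, then alphabetically by rule name within a type.
ordered = sorted(rules, key=lambda r: (TYPE_PRECEDENCE[r["type"]], r["name"]))
print([r["name"] for r in ordered])
# ['cash_101', '20_range', '10_acct_like', '30_dept_like']
```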
Note:
Avoid using special characters in names or spaces if you plan to use
batch scripts. Some characters may cause issues when run from a
command line.
Note:
When using multi-dimensional mapping, the source needs to be less than or
equal to 75 characters.
• Explicit
• Between
• Like
• In
12. In Value, specify the dimension member name.
14. Select Apply to Rule to apply the mapping only to a specific data rule in the location.
For other data rules in the location the mappings are not applied.
By default, mappings specified at a location are applicable to all data rules in a location.
15. Click Save.
7. To reverse the sign of the target account specified, select Change Sign.
8. Enter the Rule Name.
9. In Description, enter a description of the Like mapping.
10. Select Apply to Rule to apply the mapping only to a specific data rule in a
location.
For other data rules in the location the mappings are not applied.
By default, mappings specified at a location apply to all data rules in a location.
11. Click Save.
Using Special Characters in the Source Value Expression for Like Mappings
The Source and Target Value expressions can have one or more special characters.
Special characters are supported for Like mappings only.
• Asterisk (*)
An asterisk (*) represents the source value. The asterisk (*) can be prefixed or
suffixed by one or more characters, which filters the source value by that prefix or
suffix. The wildcard takes whatever is present in the source and puts it in the
target column, usually adding a prefix.
• Question Mark (?)
The question mark (?) strips a single character from the source value. You can use
one or more question marks (?) in the expression, and you can combine question
marks with other expressions. For example, A?? matches members that start with A
followed by any two characters, and either selects those members or strips off the
two characters.
• <1>, <2>, <3>, <4>, <5>
Processes rows that have concatenated values and extracts the corresponding
value. The source member must use the "_" character as the separator.
Note:
<1>, <2>, <3>, <4>, <5> can be used with a question mark (?) but
cannot be used with an asterisk (*).
• <BLANK>
Processes only rows that contain the blank character (space).
The system reads the expression as <BLANK> only when the source member is ' ',
that is, a single space character surrounded by single quotes. If the source
contains a NULL, shown as two consecutive delimiters (,,) or as a space surrounded
by double quotes (" "), the system does not interpret the NULL as a <BLANK>. Only
the '<space char>' expression is interpreted.
Note:
The <BLANK> notation may be used in both source and target expressions. If
used in a target expression, it writes a blank space to the target.
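A simplified model of how these special characters could transform a source value into a target value, following the descriptions above (the function is illustrative, and the "?" handling in particular is a loose approximation of the documented behavior):

```python
def apply_like_target(source: str, target_expr: str) -> str:
    """Apply a Like-mapping target expression to a source value (simplified)."""
    if target_expr == "<BLANK>":
        # <BLANK> in a target expression writes a single blank space.
        return " "
    if target_expr.startswith("<") and target_expr.endswith(">"):
        # <1>..<5>: extract the n-th "_"-separated segment of the source.
        index = int(target_expr[1:-1])
        return source.split("_")[index - 1]
    if "*" in target_expr:
        # "*" stands for the whole source value, optionally prefixed/suffixed.
        return target_expr.replace("*", source)
    if "?" in target_expr:
        # Each "?" keeps one character; characters beyond the mask are stripped.
        return source[: target_expr.count("?")]
    return target_expr

print(apply_like_target("1000", "A*"))         # A1000
print(apply_like_target("1000", "*_DUP"))      # 1000_DUP
print(apply_like_target("UNIT1_CC42", "<2>"))  # CC42
```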
Note:
In Data Management, Jython script is not supported for conditional mapping
(#SCRIPT cannot be used in the Target value column.)
Data Management does not perform an error check or validate the script. You need to
test the script on your data files in a test environment and verify the results.
9. In Rule Name, specify the data load rule to use with the mapping script.
10. Click Save.
6. In Rule Name, enter the data rule name for the mapping.
7. Click Save.
Result: 1000 = A1000

Target Value: *_DUP
Result: 1000 = 1000_DUP
#FORMAT—Indicates that a mapping type of FORMAT is specified in the target member.

<format mask>—A user-defined format mask with the following characters used to define the
format:
• "?"—Include a character from a specific position in the source member or segment within
a member.
• "#"—Skip or drop a character from the source when creating the target member.
• "character"—Include the user-defined character on the target as-is. Used for prefixing,
suffixing, or any fixed string or required character. Can be used in conjunction with the
special format mask characters.
• "*"—Include all characters from the source segment or source. When "*" is used as the
only format mask character in a segment, the entire segment value is copied from the
source. When "*" is used in conjunction with "#" or "?", all remaining and unused
characters are brought over. "*" is a wildcard that takes the remaining characters not
specified by "?" or "#". For example, when the source is "abcd" and the mask is "*", the
target is "abcd"; when the mask is "?#*", the result is "acd". If Data Management
encounters a "*" within a segment, anything specified after the "*" is ignored other than
the "character" specified in the format.

<segment delimiter>—The optional segment delimiter defines the character used to delimit
the segments in the source and target member. For this rule type, the source and target
delimiter must be the same. When the segment delimiter is not specified, the format mask is
applied to the entire member, independent of any segment specification or delimiter.
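As a rough mental model of the mask characters, the sketch below interprets "?", "#", "*", and literal characters against a single source segment; it reproduces the documented "abcd" with "?#*" example but is not the actual Data Management parser:

```python
def apply_format_mask(source: str, mask: str) -> str:
    """Apply a #FORMAT-style mask to a single source segment (simplified).

    "?" copies one character, "#" skips one, "*" copies all remaining
    characters, and any other character is emitted literally.
    """
    out, i = [], 0
    for ch in mask:
        if ch == "?":
            out.append(source[i]); i += 1
        elif ch == "#":
            i += 1                      # drop this source character
        elif ch == "*":
            out.append(source[i:])      # take everything that remains
            i = len(source)
        else:
            out.append(ch)              # literal prefix/suffix character
    return "".join(out)

print(apply_format_mask("abcd", "?#*"))  # acd
print(apply_format_mask("abcd", "*"))    # abcd
```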
Replacing Segments
You can use the format of the source member as the definition of the target member, but
replace some of the source segments rather than reuse the values from the source. For
example, you may have a requirement to filter the source by the value of the 4th segment,
replace the 7th segment with an explicit value, and then retain the values of the other
segments as in the following:
Source:
??????-??????-?-012000000-??????-???-???????-??????-??????-??????-???
Target:
??????-??????-?-012000000-??????-???-GROUP-??????-??????-??????-???
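The target-side rebuild for such a segmented mask can be sketched as follows (the source-side filter on segment 4 is omitted; the function and the sample member are illustrative assumptions):

```python
def replace_segments(source: str, mask: str, delimiter: str = "-") -> str:
    """Rebuild a segmented member: "?"-only mask segments keep the source
    segment; any other mask segment replaces it with an explicit value."""
    result = []
    for src_seg, mask_seg in zip(source.split(delimiter), mask.split(delimiter)):
        if set(mask_seg) == {"?"}:
            result.append(src_seg)   # keep the source segment value
        else:
            result.append(mask_seg)  # explicit replacement (e.g. GROUP)
    return delimiter.join(result)

src = "AAAAAA-BBBBBB-C-012000000-DDDDDD-EEE-FFFFFFF-GGGGGG-HHHHHH-IIIIII-JJJ"
mask = "??????-??????-?-012000000-??????-???-GROUP-??????-??????-??????-???"
print(replace_segments(src, mask))
# AAAAAA-BBBBBB-C-012000000-DDDDDD-EEE-GROUP-GGGGGG-HHHHHH-IIIIII-JJJ
```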
Note:
If any other string operation is desired, use scripting.
Note:
If you add a minus sign in front of a target account value, then it is imported
with the "Change Sign" selected.
Column | Mapping
100, Cash, 100, Explicit Mapping | Explicit mapping.
100>199, Cash, R2, Between Mapping | ">" indicates a Between mapping.
1*, Cash, R3, Like Mapping | "*" indicates a Like mapping.
#MULTIDIM ACCOUNT=[4*] AND UD3=[000],Cash,R4,Multi Dimension Mapping | "#MULTIDIM" indicates a multiple dimension mapping. The actual column name used for the mapping is the Data Table Column Name. The easiest way to create a multiple dimension mapping is to create a mapping through the user interface and then export it to the file. You can then modify the file by applying additional mapping.
10, 20, In Mapping | Source values are enclosed with " " and separated by a comma (,) for the In mapping. For example, IN 10, 20 is defined as "10,20" in the source column of the import file.
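When post-processing an exported mapping file, the mapping type can be inferred from the source column exactly as the table above describes. A hypothetical helper (not part of any Data Management API):

```python
def infer_mapping_type(source_value: str) -> str:
    """Infer the mapping type from the source column of a mapping import file."""
    if source_value.startswith("#MULTIDIM"):
        return "Multi-Dimension"
    if ">" in source_value:
        return "Between"          # e.g. "100>199"
    if "," in source_value:
        return "In"               # e.g. "10,20" (comma-separated list)
    if "*" in source_value or "?" in source_value:
        return "Like"             # wildcard characters
    return "Explicit"

print(infer_mapping_type("100"))      # Explicit
print(infer_mapping_type("100>199"))  # Between
print(infer_mapping_type("1*"))       # Like
print(infer_mapping_type("#MULTIDIM ACCOUNT=[4*] AND UD3=[000]"))  # Multi-Dimension
```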
The mapping template also includes a macro script that pulls Oracle Hyperion Financial
Management dimensions directly from the target application to which you are connecting.
You must upload the Excel template to the Data Management server, and then select the
Excel file as the file to load in the data load rule, or when prompted by the system if the file name is
left blank. The system determines if the file being processed is an Excel file, and then reads
the required formatting to load the file.
When working with a mapping template in Excel:
• Do not have any blank lines in the map template.
• You can insert lines in the template, but you must keep new lines in the designated area.
• Each template supports a single dimension.
To download an Excel template:
1. On the Workflow tab, under Data Load, select Data Load Mapping.
2. Select the All Mapping tab.
3. In the Import drop-down, select Download Excel Template.
A Maploader.xls file is downloaded. Copy or save the file to your hard drive.
4. Open the Maploader.xls file.
5. Select the Map tab.
6. Enter the Location in cell B1, Location ID in cell B2, and select the dimension from the
Dimension drop-down in cell B3.
7. Complete the following column fields:
a. In Source, enter the source dimension value.
You can specify wildcards and ranges when entering the source dimension.
• Wildcards for unlimited characters—Use asterisks (*) to denote unlimited
characters. For example, enter 548* or *87.8.
• Wildcards for single character place holders—Use questions marks (?) to denote
single character place holders. For example,
– 548??98
– ??82???
– ??81*
• Range—Use commas (,) to denote ranges (no wildcard characters are allowed).
For example, specify a range as 10000,19999. This range evaluates to all values
from 10000 to 19999, inclusive of both the start and end values.
• In map—Use commas (,) to separate entries (no wildcard characters are
allowed). You must have at least three entries, or the map shows as a Between
map. For example, specify an In map as 10,20,30.
• Multi-Dimension map—Use #MULTIDIM to indicate a multidimensional
mapping. Enter the DIMENSION NAME=[VALUE]. The value follows the same logic
as wildcards, ranges, and In maps. In the following example, the search
criteria are all accounts starting with 77 and UD1 = 240. For example,
#MULTIDIM ACCOUNT=[77*] AND UD1=[240].
b. In Source Description, enter a description of the source value.
c. In Target, enter the target dimension value.
d. In Change Sign, enter True to change the sign of the Account dimension.
Enter False to keep the sign of the Account dimension. This setting is only
used when mapping the Account dimension.
e. In Data Rule Name, enter the data rule name when the mapping applies to a
specific data rule name.
Note:
If you are adding an Explicit mapping, the rule name must equal the
source value.
Note:
If you are importing an Excel 2010 or 2016 file that has already been
exported, open the file before importing it. This step launches macros in
the Excel file that are required for the import process.
5. Optional: If necessary, click Upload to navigate to the file to import, and then click
OK.
The Select Import Mode and Validation screen is displayed.
The mapping inherits the default data load rule, and shows the description: "System
Generated Mappings."
If you use Explicit mapping, the data rule name must equal the source value.
Note:
To delete all mappings, select "Delete All Mappings."
specified by a user for a period and category. Data load rules are defined for, and are
specific to, the locations that you have set up.
You can create multiple data load rules for a target application so that you can import data
from multiple sources into a target application. Use the following high-level process to create
a data load rule:
1. Create the data load rule.
2. Define data load rule details.
3. Execute the data load rule.
Note:
Before you create data load rules, ensure that your source system data does not
include special characters, which are not supported in the target application.
Also avoid using special characters in names or spaces if you plan to use batch
scripts. Some of the characters may cause issues when run from a command line.
enable support of additional General Ledger data sources where periods are
not defined by start and end dates.
6. Optional: Enter a description.
7. Select the source options.
8. From Target Plan Type, select the plan type of the target system.
Data Management currently supports data loads that have up to six plan types.
Planning can support three custom plan types and up to four Planning Modules
applications (Workforce, Capex, Project, Financials). You can enable any
combination of these applications. If you create a Planning Modules
application and more than two custom plan types, you cannot support a data
load to all four applications.
9. For Planning and Essbase, select the Source Parameters tab, and specify any
parameters.
See Defining Source Parameters for Planning and Essbase.
To define source options:
1. On the Workflow tab, under Data Load, select Data Load Rule.
2. In Data Load Rule, select a data load rule or click Add.
3. Select the Source Options tab.
4. Optional: If you are working with a multi-column data load, select the Column
Headers tab, and specify the start date and end date of the numeric columns.
See Loading Multi-Column Numeric Data.
5. Optional: To work with target options, select the Target Options tab, and select
any options.
See the following:
a. For Planning application options, see Defining Application Options for
Planning.
b. For Financial Consolidation and Close, application options, see Defining
Application Options for Financial Consolidation and Close.
6. Optional: You can specify free form text or a value by selecting Custom Options
and specifying the text you want to associate with the data load rule.
See Creating Custom Options.
7. Click Save.
The categories listed are those that you created in the Data Management setup, such as
"Actual." See Defining Category Mappings.
3. Optional: In Description, specify a description of the data load rule.
4. Optional: If the target system is a Planning application, from the Target Plan Type drop-
down, select the plan type of the target system.
Data Management currently supports data loads that have up to six plan types. Planning
can support three custom plan types and up to four Planning Modules applications
(Workforce, Capex, Project, Financials). You can enable any combination of these
applications. If you create a Planning Modules application and more than two custom
plan types, you cannot support a data load to all four applications.
If the target system is Financial Consolidation and Close, from the Target Cube drop-
down, select the data load cube type.
Available options:
• Consol
• Rates
5. Optional: In Import Format, if the file type is a multiple period text file (with contiguous
periods, or noncontiguous periods), select the import format to use with the file so you
can override the import format. For example, specify an import format for single and
multiple period data rules, which enables you to load single or multiple period files from
the same location. In this case, the import format selected must have the same target as
the location selected in the POV. If the import format is unspecified, then the import
format from the location is used.
The starting and ending periods selected for the rule determine the specific periods in the
file when loading a multiple period text file.
In the file, when amounts are unavailable for contiguous periods, then you can explicitly
map the respective amount columns to required periods in the data rule in Data Load
Mapping. When you execute the rule, the data is loaded to the periods as specified by
the explicit mapping.
6. Optional: Enter a description.
7. If necessary, select the Source Options and add or change any dimensional data.
8. Click Save.
Note:
For Financial Consolidation and Close, note that the Account dimension cannot be
concatenated with other dimensions as part of the import.
2. In Data Load Rule, select a data load rule for a Planning and Essbase source,
and then click Add.
3. Select the Source Parameters tab.
4. (Planning only): In Data Extract Option, select the type of member data to
extract.
Members can be extracted depending on how they have been flagged for
calculation. For a member flagged as "stored," calculated data values are stored
with the member in the database after calculation. For a member tagged as
"dynamic calc," the member's data values are calculated upon retrieval.
Note:
The former name of the Data Extract option was "Extract Dynamic Calculated
Data."
Available options:
• All Data—Extracts stored values and dynamically calculated values for both
Dense and Sparse dimensions.
The All Data option is always shown, but only works in the following cases:
– ASO reporting applications
– Planning and Planning Modules with Hybrid enabled
• Stored and Dynamic Calculated Data—Extracts stored and dynamically calculated
values for Dense dimensions only, not Sparse dimensions.
• Stored Data Only—Extracts stored data only. Dynamically calculated values
are excluded from this type of extract.
Note:
If you set the Extract Dynamic Calculated Data option on the Data
Load Rule screen to "Yes," and a leaf level member’s (Level 0) Data
Storage is set to "Dynamic," then the data is not picked up by the
extraction process. To pick up the data, set the member’s Data Storage
to something besides "Dynamic," to include the value in the selection
from the source application.
Specify a value between 0 and 16. If no value is provided, the number of decimal
positions of the data to be exported is used, up to 16 positions, or a value determined by
the Data Precision option if that value is specified.
This parameter is used with an emphasis on legibility; output data is in straight text
format. Regardless of the number of decimal positions in the data, the specified number
is output. Note that it is possible the data can lose accuracy, particularly if the data
ranges are from very large values to very small values, above and below the decimal
point.
By default, sixteen positions for numeric data are supported, including decimal positions.
If both the Data Precision and the Data Number of Decimal options are specified, the
Data Precision option is ignored.
7. Click Save.
Note:
In Financial Consolidation and Close, for YTD data loads, data is stored in the
Periodic view. In this case, select this option so that pre-processing is done
to convert the YTD data from the file to periodic data for loading.
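The pre-processing the note describes amounts to a first-difference over the fiscal year. The sketch below is illustrative only; the function and field names are hypothetical and not part of Data Management.

```python
# Illustrative sketch of converting year-to-date (YTD) amounts from a file
# to periodic amounts for loading. Names are hypothetical.
def ytd_to_periodic(ytd_by_period: dict) -> dict:
    """Given {period: ytd_amount} in fiscal-year order, return periodic amounts.

    periodic[n] = ytd[n] - ytd[n-1]; the first period's periodic value
    equals its YTD value.
    """
    periodic = {}
    previous = 0.0
    for period, ytd in ytd_by_period.items():  # dicts preserve insertion order
        periodic[period] = ytd - previous
        previous = ytd
    return periodic

print(ytd_to_periodic({"Jan": 100.0, "Feb": 250.0, "Mar": 250.0}))
# {'Jan': 100.0, 'Feb': 150.0, 'Mar': 0.0}
```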
When you run a data load rule, you have several options:
Note:
When a data load rule is run for multiple periods, the export step occurs only
once for all periods.
• Import from Source—Data Management imports the data from the source
system, performs the necessary transformations, and exports the data to the Data
Management staging table.
Select this option only when:
– You are running a data load rule for the first time.
– Your data in the source system changed. For example, if you reviewed the
data in the staging table after the export, and it was necessary to modify data
in the source system.
In many cases, source system data may not change after you import the data from
the source the first time. In this case, it is not necessary to keep importing the data
if it has not changed.
When the source system data has changed, you need to recalculate the data.
Note:
Oracle E-Business Suite and source imports require a full refresh of data
load rules. The refresh only needs to be done once per chart of
accounts.
Note:
Select both options only when the data has changed in the source system
and you want to export the data directly to the target application.
Note:
After you delete data load rules, you can delete a source system. After you execute
a deletion, users cannot drill through to an Enterprise Resource Planning (ERP)
source.
Workflow Grid
When you select a Workflow step, the following occurs:
Data Management uses fish icons to indicate the status of each step. When a Workflow step
is completed successfully, the fish is orange. If the step is unsuccessful, the fish is gray.
Processing Data
Click to navigate to the Process Detail page to monitor the ODI job progress.
5. Click OK.
Click to navigate to the Process Detail page to monitor the ODI job progress.
5. Click OK.
Note:
When you run and open the check report from the Workbench, it is not saved
to the Data Management folder on the server.
• Querying by Example
• Freezing Data
• Detaching Data
• Wrapping Text
Viewing Data
The View option provides multiple ways to view data, including:
Table—Selects the source or target data to display in the grid:
• Source (All)—Shows both mapped and unmapped source dimensions (ENTITY,
ACCOUNT, UD1, UD2,… AMOUNT).
• Source (Mapped)—Shows only mapped source dimensions.
• Target—Shows only target dimensions (ENTITYX, ACCOUNTX, UD1X, UD2X,
….AMOUNTX).
• Source and Target—Shows both source and target dimensions (ENTITY, ENTITYX,
ACCOUNT, ACCOUNTX, UD1, UD1X, AMOUNT, AMOUNTX).
Columns—Selects the columns to display in the data:
• Show All
• Entity
• Account
• Version
• Product
• Department
• STAT
• Amount
• Source Amount
Freeze/Unfreeze—Locks a column in place and keeps it visible when you scroll the data grid.
The column heading must be selected to use the freeze option. To unfreeze a column, select
the column and from the shortcut menu, select Unfreeze.
Detach/Attach—Detaches columns from the data grid. Detached columns display in their
own window. To return to the default view, select View, and then click Attach or click Close.
Sort—Use to change the sort order of columns in ascending or descending order. A multiple
level sort (up to three levels and in ascending and descending order) is available by selecting
Sort, and then Advanced. From the Advanced Sort screen, select the primary "sort by"
column, and then the secondary "then by" column, and then the third "then by" column.
The search fields that are displayed in the advanced search options differ depending on what
artifact you are selecting.
Reorder Columns—Use to change the order of the columns. When you select this option,
the Reorder Columns screen is displayed. You can select a column, and then use the scroll
buttons on the right to change the column order.
Query by Example—Use to toggle the filter row. You can use the filter row to enter text to
filter the rows that are displayed for a specific column. You can enter text to filter on, if
available, for a specific column, and then click Enter. To clear a filter, remove the text
to filter by in the text box, then click Enter. All text you enter is case sensitive.
Formatting Data
You can resize the width of a column by a number of pixels or a percentage.
You can also wrap text for each cell automatically when text exceeds the column
width.
To resize the width of a column:
1. Select the column to resize.
2. From the table action bar, select Format, and then Resize.
3. In the first Width field, enter the value by which to resize.
You can select a column width from 1 to 1000.
4. In the second Width field, select pixel or percentage as the measure to resize by.
5. Select OK.
To wrap the text of a column:
1. Select the column with the text to wrap.
2. From the table action bar, select Format, and then Wrap.
Showing Data
You can select the type of data to display in the data grid including:
• Valid Data—Data that was mapped properly and is exported to the target
application.
• Invalid Data—One or more dimensions was not mapped correctly, and as a
result the data is not exported to the target.
• Ignored Data—A user-defined explicit mapping to ignore a source value when
exporting to the target. This type of mapping is defined in the member mapping
by assigning a special target member with the value of ignore.
• All Data—Shows all valid, invalid and ignored data.
To show a type of data:
1. Select Show.
2. Select from one of the following:
• Valid Data
• Invalid Data
• Ignored Data
• All Data
Note:
Exported data from Excel is exported either in a CSV (*.csv) or Excel (*.xls) file
format depending on the "Workbench Export to File Format" setting in System
Settings. The default file format for exports is CSV. For more information, see
Setting System-Level Profiles.
Querying by Example
Use the Query by Example feature to filter rows that are displayed for a specific
column. You can enter text to filter on, if available, for a specific column, and then click
Enter. To clear a filter, remove the text to filter by in the text box, then click Enter. All
text you enter is case sensitive.
To query by example:
1. From the table action bar, click to enable the filter row.
The filter row must appear above the columns to use this feature.
2. Enter the text by which to filter the values in the column and click Enter.
Note:
When entering text to filter, the text or partial text you enter is case-
sensitive. The case must match exactly. For example, to find all target
applications prefixed with "HR", you cannot enter "Hr" or "hr".
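The case-sensitive, prefix-style matching described above can be sketched as follows. The helper is hypothetical, not a Data Management API; it only models the documented behavior.

```python
# Sketch of case-sensitive Query by Example filtering on one column.
# (Hypothetical helper, not part of Data Management.)
def filter_rows(rows, column, text):
    """Keep rows whose value in `column` starts with `text`, case sensitively."""
    return [row for row in rows if str(row.get(column, "")).startswith(text)]

apps = [{"name": "HRPlan"}, {"name": "HrPlan"}, {"name": "HRActual"}]
print(filter_rows(apps, "name", "HR"))  # matches HRPlan and HRActual only
print(filter_rows(apps, "name", "hr"))  # matches nothing: case must agree
```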
Freezing Data
Use the Freeze feature to lock a column in place and keep it visible when you scroll
the data grid.
To freeze a column:
1. Select the column to freeze.
Detaching Data
Use the Detach feature to detach columns from the data grid. When you detach the
grid, columns display in their own window. To return to the default view, select View,
and then click Attach or click Close.
To detach columns:
1. Select the column to detach.
Wrapping Text
You can wrap text for each cell automatically when text exceeds the column width.
To wrap text for a column:
1. Select the column with the text to wrap.
2. Click .
Note:
Process Detail logs are purged every seven days. If you want to download a log, use
EPM Automate to download the log to a local folder. The command is downloadFile.
For example: epmautomate downloadfile "[FILE_PATH]/FILE_NAME". For more
information, see Working with EPM Automate for Oracle Enterprise Performance
Management Cloud.
Note:
The ODI Session number is present in Process Details only when
the data is processed during an offline execution.
Note:
When entering text to filter, the text or partial text that you enter is case
sensitive. For example, to find all target applications prefixed with "HR", you
cannot enter "Hr" or "hr". For additional information on filtering, see Data
Management User Interface Elements.
The following template contains one line of metadata (row 1) and three lines of imported data
(rows 5–7).
Dimension values and amounts should be populated in the respective columns according
to the tags defined in row 1. To add additional dimension tags, add columns. Add data
by adding rows.
When adding rows or columns, add them within the named region. Excel updates the
region definition automatically. If you add rows outside of the region, update the region
to include these new rows or columns. When adding dimension columns, add a
dimension tag to specify whether the column is an account, entity, intercompany
transaction, amount, or user-defined (UD) dimension. Note that the entity dimension is
represented by the tag for "Center."
Table 4-16 Data Management dimension tags and the corresponding tags
In the template that is provided with Data Management, some of the rows are hidden.
To update the columns and the column tags, you need to unhide these rows. To do
this, select the row above and below the hidden rows, and then update the cell height.
A setting of 12.75 is the standard height for cells, which shows all hidden rows for the
selected range in the sheet. You can re-hide the rows after changes have been made.
Note:
You only need to include a period key (for example, V1:2016/1/31) with the tag if the
periods are non-contiguous. If the periods are contiguous, then the period keys are
ignored, and the start/end periods selected when running the rule are used to define
the periods.
Note:
The Excel template expects an empty row between the tags and the first row of
data.
Note:
The import of mapping rules using an Excel template provides a place to specify a
mapping script.
• Merge—Overwrites the data in the application with the data in the Excel data
load file.
• Replace—Clears values from dimensions in the Excel data load file and
replaces them with values in the existing file.
6. Click Validate to validate the mappings.
7. Click OK.
The mapping inherits the default data load rule and shows the description:
"System Generated Mappings."
E1,100,2016,Jan,USD,100
E2,100,2016,Jan,USD,200
E3,100,2016,Feb,USD,300
E4,100,2016,Feb,USD,400
In another example, if you select a Jan-March period range, and the file includes: Jan,
Feb, Mar, and Apr, then Data Management only loads Jan, Feb, and Mar.
E1,100,2016,Jan,USD,100
E2,100,2016,Jan,USD,200
E3,100,2016,Feb,USD,300
E4,100,2016,Feb,USD,400
E4,100,2016,Mar,USD,400
E4,100,2016,Mar,USD,400
E4,100,2016,Apr,USD,400
E4,100,2016,Apr,USD,400
Data Management loads the periods specified on the Execute Rule screen, and ignores rows
in the file that don't match what you selected to load.
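The row-filtering behavior described above can be sketched with a small helper. The CSV column layout and function name are illustrative assumptions, not the actual Data Management implementation.

```python
# Sketch of ignoring file rows outside the selected period range.
# Column layout (entity, account, year, period, currency, amount) is assumed.
import csv
import io

def load_periods(file_text, selected_periods):
    """Return only rows whose period column falls in the selected range."""
    reader = csv.reader(io.StringIO(file_text))
    return [row for row in reader if row and row[3] in selected_periods]

data = """E1,100,2016,Jan,USD,100
E2,100,2016,Feb,USD,200
E3,100,2016,Apr,USD,300"""

# Running the rule for Jan-Mar loads the Jan and Feb rows and ignores Apr.
print(load_periods(data, {"Jan", "Feb", "Mar"}))
```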
i. In Field Number, enter the field number from the file to import (defaults to
the field number from the file when text is selected.)
j. In Expression, specify the expression to apply to the Period Number row.
5. Click Save.
6. Specify the parameters of the data load rule, and then execute it.
See Defining Data Load Rules to Extract Data.
4. Select the Journal Label Row, and in the Expression field, enter the journal ID from
the journal template to use as the journal label when loading to Financial
Consolidation and Close.
You cannot include the following characters in the journal label:
• period ( . )
• plus sign ( + )
• minus sign ( - )
• asterisk ( * )
• forward slash ( / )
• number sign ( # )
• comma ( , )
• semicolon ( ; )
• at symbol ( @ )
• double quotation mark ( " )
• curly brackets ( { } )
Additionally, you can specify the label; label and group; or no label or group in the
Expression field.
• To specify a label and group, add values in the format
LABEL=<Label>#GROUP=<Group> in the Expression field. For example, you might
enter LABEL=JE101#GROUP=CONSOLADJ.
• You cannot specify only a group (that is, without a label).
• If the journal ID field is null, Data Management automatically creates the journal
label as JL<process id>.
The journal name includes the process ID, and you can tie it back to Data
Management if needed.
• For a multiple journal load, you can specify different journal labels in the file import.
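The label rules above can be sketched as a small validator. This is a hypothetical helper for illustration; Data Management performs its own validation, and the function name is an assumption.

```python
# Illustrative validation of the journal label rules above: disallowed
# characters and the LABEL=<Label>#GROUP=<Group> expression format.
INVALID_CHARS = set('.+-*/#,;@"{}')

def parse_label_expression(expr):
    """Return (label, group) from an Expression value, validating characters."""
    label, group = expr, None
    if expr.startswith("LABEL="):
        body = expr[len("LABEL="):]
        if "#GROUP=" in body:
            label, group = body.split("#GROUP=", 1)
        else:
            label = body
    bad = set(label) & INVALID_CHARS
    if bad:
        raise ValueError(f"invalid characters in journal label: {bad}")
    return label, group

print(parse_label_expression("LABEL=JE101#GROUP=CONSOLADJ"))  # ('JE101', 'CONSOLADJ')
```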
5. Click Add and insert the Description row twice.
6. In the Description 2 row, enter the journal description in the Expression field.
The journal description must be entered in the Description 2 row, so you must add the
description row twice; you can then delete the Description 1 row if it is not needed.
The default template is copied, and all the necessary custom dimensions are
added as columns before the Amount column. By default, two custom dimensions
are included in the template.
4. On the Open screen, open or save the template, and then click OK.
Metadata tags are required in a specific tabular format. The metadata row order is
important, but the column order is not. The first five rows (metadata header) of data
must contain the metadata tags for the table of data. The sample journal template
shown contains the metadata header (rows 1–5) and two lines of imported data (rows
6 and 7).
To define the completed template in Excel, you must create a range name that
includes all the metadata and the data value cells. The range name must begin with
the prefix "ups." For example, you can create a range name to define a standard
template and name it [upsStandardJV (A1 to D7)].
Metadata Structure
The metadata header (rows 1–5) instructs Data Management on how to find the
relevant segments of data that it handles in this template. The following topics
explain how each piece of metadata in rows 1–5 is used by Data Management.
Table 4-18 Financial Consolidation and Close Dimensions and Corresponding Tags
To create your own journal template, you create a range name that includes all
metadata and data cells and that begins with the prefix ups. For example, for a
standard template, create the range name [upsStandardJV (B16 to J33)].
The following illustration depicts a journal template. Note that in this template, the
metadata are not in rows 1–5, but in rows 16–20. The template has an upsJournal
starting from row 16. Therefore, rows 16–20 are the first five rows in the upsJournal.
Rows 4–14 provide a simple interface to assist users with creating the metadata header.
Metadata information is entered here and referenced by the metadata header.
(Enter journal data in the respective columns and by adding more rows within the
range. The easiest approach is to add rows to the existing range, use a single range,
and use the default upsJournal. Add columns to the spreadsheet based on the
dimensionality of the target application.)
Loading Journals
To load a journal:
1. On the Workflow tab, under Data Load, select Data Load Workbench.
When you load a journal, Data Management uses the current POV to determine
location, category, and period. To use another POV, select another POV on the
Data Load Workbench.
2. Click Load Journal.
3. On the Load Journal screen, to browse for a journal file, click Select.
a. Select a journal template to load from the server to which you uploaded one and click
OK.
When a journal has been successfully loaded, the Check button is enabled.
Note:
When loading journals to a Financial Consolidation and Close
target from Data Management, consider that Data Management
(Cloud) determines the account types and converts the credits/
debits. All positive numbers are loaded as debits and all negative
numbers are loaded as credits. If you need to designate other credit
or debit signs for your account type, use the change sign feature in
Data Load Mappings or another customized method to handle
credit/debit changes for your journal loads.
When loading journals to a Financial Consolidation and Close
target from Oracle Hyperion Financial Data Quality Management,
Enterprise Edition (on-premises), consider that Data Management
does not determine the account types or select the credits/debits. All
positive numbers are loaded as credits and all negative numbers are
loaded as debits. If you need to designate other credit or debit signs
for your account type, use the change sign feature in Data Load
Mappings or another customized method to handle credit/debit
changes for your journal loads.
b. Optional: To download a journal file, click Download and open or save the
journal file.
c. Optional: To upload a journal file, click Upload, then navigate to the file to
upload, and click OK.
4. Click Check to validate and load the journal.
See Checking Journals.
Checking Journals
Before journals can be posted, they must be checked. This process verifies whether
the POV entered in the Excel file for the journal matches the current POV. It also
ensures that the ups range is valid. If the validation is successful, the Post button is
enabled.
Note:
If the journal import file is not XLS or XLSX, then the check feature is not
available.
To check a journal:
1. Make sure that a successfully loaded journal file is in the File field.
The journal file must be an Excel (.xls) file type.
2. Click Check.
3. Select Online or Offline for the processing method.
Online checking runs immediately, and offline checking runs in the background.
When a journal is checked, Data Management examines the journal file for all ranges
with names beginning with ups. It then examines and validates the metadata tags found
in each ups range. Data Management does not check metadata segments that include an
invalid range.
When Data Management validates the journal, you get the following message: "The
journal file checked successfully."
Posting Journals
After a journal has been validated (checked) successfully, you can post the journal. Posting a
journal appends or replaces the data displayed in the Import Format screen (as determined
by the load method specified in the journal).
To post the journal:
1. Select the journal.
2. Click Post.
When Data Management posts the journal, you get the following message: "The journal
file loaded successfully."
Tutorial Video
Note:
Administrators can update the domain name that is presented to the
user, but Data Management requires the original domain name that
was provided when the customer signed up for the business process
(not the domain "display" name). Alias domain names cannot be
used when setting up Oracle Enterprise Performance Management
Cloud connections from Data Management and from Financial Data
Quality Management, Enterprise Edition.
"Vision" application, you might assign the Demo prefix to designate the target application
with a unique name. In this case, Data Management joins the names to form the
name DemoVision.
7. Click OK.
8. Optional: Click Refresh Metadata to synchronize the application metadata from the
target application and display any new dimensions.
Once the new dimensions are displayed, navigate to the Import Format option and map
any new dimensions to their source columns.
9. Optional: Click Refresh Members to synchronize members from the target dimension.
This feature enables you to see new members in a dimension for target members in a
mapping.
10. In Target Application, click Save.
4-95
Chapter 4
Data Load, Synchronization and Write Back
See Defining Locations and Defining Data Load Rules to Extract Data.
Overview
Oracle Enterprise Performance Management Cloud supports a variety of ways for
importing data from a range of financial data sources, and then transforming and
validating the data:
• Data Loading—Several types of sources are available for data loads:
– file-based applications
– Oracle General Ledger applications from the Oracle Financials Cloud
– Budgetary Control applications from the Oracle ERP Cloud
– Oracle NetSuite Save Search Results data sources
– Oracle Human Capital Management Cloud extracts from the Oracle HCM
Cloud
• Synchronization—Move data between EPM Cloud and ASO Essbase cubes
created by Planning applications. That is, select an EPM application as an
integration source.
• Write back—In Planning, you can write back budget data to a file-based
source system or an Oracle General Ledger application in the Oracle ERP Cloud.
Watch this tutorial video to learn about extracting data from Oracle Planning and
Budgeting Cloud using Data Management.
Tutorial Video
Data Synchronization
Data synchronization enables you to synchronize and map data between Oracle Enterprise
Performance Management Cloud source and target applications, irrespective of the
dimensionality of the applications, simply by selecting the source and target EPM Cloud
applications and then mapping the data. Given the powerful mapping features already
available, the data can be easily transformed from one application to another.
For example, use data synchronization to move:
• Data from Planning input cubes to reporting cubes.
• Actuals from Financial Consolidation and Close to a Planning reporting cube for
variance reporting.
Tasks enabled by the data synchronization:
• Create and modify synchronizations.
• Select source and target applications.
• Define mappings between sources and targets.
• Push data from Planning to Essbase ASO cubes created by Planning.
• Copy consolidated data from Essbase ASO cubes to Planning for future planning.
• Execute synchronizations.
• View logs of synchronization activities.
At a high level, the steps to synchronize data in Data Management include:
Note:
Make sure the EPM applications to be synchronized are registered as target
applications.
Note:
To make certain that Data Management loads periodic instead of year-to-date
(YTD) data, you might have to hard code the "Periodic" Value dimension in the
import format.
5. Execute—When the data rule is executed, data from the source EPM System is
extracted to a file. The data can be imported and processed using the data load
workflow process.
6. Export—Synchronizes the data.
Note:
When specifying a period range, make sure the start and ending periods are within
a single fiscal year. When data ranges cross fiscal years, duplicate data results.
The source periods to be extracted are determined by the period mapping type.
Default Period Mapping
Default period mappings default to the list of source application periods using the application
or global period mappings based on the period key. The list of source periods is added as
Year and Period filters. For example, you can load data from Planning to Essbase.
Explicit Period Mapping
The Explicit method for loading data is used when the granularity of the source periods and
target application periods are not the same.
Note:
When an Essbase source dimension shares members across alternate
hierarchies, a Source Filter should be used to eliminate duplicates. For
example, if the Account dimension shares members across parallel
hierarchies headed by parent members Alt_Hier_1 and Alt_Hier_2, use the
following Source Filter function on Account to eliminate duplicates:
@Lvl0Descendants("Alt_Hier_2")
• Click to display the Member Select screen and select a member using the
member selector. Then, click OK.
The Member Selector dialog box is displayed. The member selector enables you to view
and select members within a dimension. Expand and collapse members within a
dimension using the [+] and [-].
The Selector dialog box has two panes—all members in the dimension on the left
and selections on the right. The left pane, showing all members available in the
dimension, displays the member name and a short description, if available. The
right pane, showing selections, displays the member name and the selection type.
You can use the V button above each pane to change the columns in the member
selector.
You can also click Refresh Members to show the latest member list.
Note:
Assign filters for dimensions. If you do not assign filters, numbers from
the summary members are also retrieved.
and click .
c. To add special options for the member, click and select an option.
In the member options, "I" indicates inclusive. For example, "IChildren" adds
all children for the member, including the selected member, and
"IDescendants" adds all the descendants including the selected member. If
you select "Children", the selected member is not included and only its
children are included.
The member is moved to the right and displays the option you selected in the
Selection Type column. For example, "Descendants" displays in the Selection
Type column.
Tip:
columns. The data file contains the header record with the list of dimensions in the order in
which they appear in the file. The file is created in the data folder with the name: EPM App
Name_PROCESS_ID.dat.
Note:
When a data load rule is run for multiple periods, the export step occurs only once
for all periods.
Data Import
The data import process imports the data file created during the extraction process. The
import process evaluates the import format based on the header record in the file and
mapping of the source to target dimension.
Write-Back
Financial budgeting information often must be compared with and controlled against
actuals stored in the general ledger system. In Data Management, write-back functionality is
available with the Export step of the data load process. In this way, both loading to the
Planning application and writing back to the General Ledger are performed as a single
consistent process.
Other Considerations:
• Data load to write back is supported only for Planning applications, and Planning
applications created from Essbase ASO cubes.
• Data load rules to write back are not supported for EPMA deployed aggregate
storage Essbase cubes.
• Only monetary and statistical amounts can be written back.
• Allocation from a source amount to multiple target amounts is not provided.
• When specifying a period range, make sure the start and ending periods are within
a single fiscal year. When data ranges cross fiscal years, duplicate data results.
or multiple period files from the same location. In this case, the import format selected
must have the same target as the location selected in the POV. If the import format is
unspecified, then the import format from the location is used.
The starting and ending periods selected for the rule determine the specific periods in the
file when loading a multiple period text file.
7. Optional: Enter a description.
8. Optional: Add or change any source filter options.
See Defining Source Filters.
9. Optional: Add or change any target options.
See Defining Application Options for Planning.
10. Define the source and target options.
Note:
For any dimensions not included in the source filter, Data Management
includes level zero members. However, it is possible in Planning applications
to have an alternate hierarchy where a member that is a parent in the base
hierarchy is also a level 0 member in a shared hierarchy.
• Click to select a member using the member selector, and then click Browse.
The Selector dialog box is displayed. The member selector enables you to view and
select members within a dimension. Expand and collapse members within a dimension
using the [+] and [-].
The Selector dialog box has two panes—all members in the dimension on the left
and selections on the right. The left pane, showing all members available in the
dimension, displays the member name and a short description, if available. The
right pane, showing selections, displays the member name and the selection type.
You can use the Menu button above each pane to change the columns in the
member selector.
Note:
Assign filters for dimensions. If you do not assign filters, numbers from
the summary members are also retrieved.
click .
c. To add special options for the member, click , and then select an option.
In the member options, "I" indicates inclusive. For example, "IChildren" adds
all children for the member, including the selected member. If you select
"Children", the selected member is not included, only its children are included.
The member is moved to the right and displays the option you selected in the
Selection Type column. For example, "Descendants" displays in the Selection
Type column.
Tip:
3. To load data from the source Planning application, select Import from Source.
Select this option to review the information in a staging table, before exporting directly to
the target general ledger system.
When you select "Import from Source", Data Management imports the data from the
Planning target application, performs the necessary transformations, and exports the
data to the Data Management staging table.
4. Select Export to Target.
5. Click Run.
Exporting to Target
Use the Export to Target feature to export data to a target application, which is the Enterprise
Resource Planning (ERP) application. Select this option after you have reviewed the data in
the data grid and need to export it to the target application.
When exporting data for Planning, the following options are available:
• Store Data—Inserts the value from the source or file into the target application, replacing
any value that currently exists.
• Replace Data—Clears data for the Year, Period, Scenario, Version, and Entity
dimensions that you are loading, and then loads the data from the source or file. Note
when you have a year of data in your Planning application but are only loading a single
month, this option clears the entire year before performing the load.
• Add Data—Adds the value from the source or file to the value in the target application.
For example, when you have 100 in the source, and 200 in the target, then the result is
300.
• Subtract Data—Subtracts the value in the source or file from the value in the target
application. For example, when you have 300 in the target, and 100 in the source, then
the result is 200.
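The four export methods above can be sketched for a single target cell. This is an illustrative model of the documented behavior, not the target application's actual load logic; the helper name is an assumption.

```python
# Sketch of the Store/Replace/Add/Subtract export methods applied to one
# target cell (illustrative only; real behavior lives in the target app).
def apply_load_method(method, source, target):
    if method == "Store Data":
        return source                   # replace the existing value
    if method == "Replace Data":
        return source                   # after clearing the target slice
    if method == "Add Data":
        return target + source          # 100 source + 200 target -> 300
    if method == "Subtract Data":
        return target - source          # 300 target - 100 source -> 200
    raise ValueError(f"unknown method: {method}")

print(apply_load_method("Add Data", 100, 200))       # 300
print(apply_load_method("Subtract Data", 100, 300))  # 200
```

Note that Store Data and Replace Data produce the same value for a single cell; they differ in scope, since Replace Data first clears the whole Year/Period/Scenario/Version/Entity slice.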
To submit the data load rule:
1. From the table action bar, in Data Rule, choose the data load rule.
2. Click .
3. In Execution Mode, select the mode of exporting to the target.
Execution modes:
• Online—ODI processes the data in sync mode (immediate processing).
• Offline—ODI processes the data in async mode (runs in the background).
Click to navigate to the Process Detail page to monitor the ODI job progress.
4. In Export, select the export method.
Export options:
• Current Dimension
• All Dimensions
• Export to Excel
5. For Current Dimension and All Dimensions export methods, in Select file
location, navigate to the file to export, and then click OK.
For the Export to Excel method, mappings are exported to a Microsoft Excel
spreadsheet.
6. Click OK.
After you export data to the target, the status of the export is shown in the
Status field for the data load rule in the Data Load Summary.
Logic Accounts
Related Topics
• Overview of Logic Accounts
• Creating a Logic Group
• Creating Accounts in a Simple Logic Group
• Creating Complex Logic Accounts
Item
Specify the name of the logic account using the item field. The logic account that is named in
the item field is displayed in the Workbench grid as the source account. This same account
can be used as a source in a mapping rule. Oracle recommends that you prefix the names of
logic accounts with an "L" or some other character to indicate that an account came from a
source file, or was generated from a logic rule. Logic accounts can only be loaded to a target
application when they are mapped to a target account.
Description
The description that you enter in the Description field is displayed in the Account Description
field in the Workbench.
• Between
• Like
• In
Table 4-19 shows the Between criteria type and an example of the corresponding Criteria Value field values.
Like (Criteria Type)—Used when the source accounts in the Criteria Value field
contain wildcard characters. Use question marks (?) as placeholders and asterisks (*)
to signify indeterminate numbers of characters.
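The Like semantics above map directly onto shell-style wildcards, so a sketch of how a criteria value such as `11?0*` would select source accounts (using Python's standard `fnmatch` module purely for illustration; the account names are hypothetical):

```python
from fnmatch import fnmatchcase

def matches_like(account, pattern):
    # "?" is a single-character placeholder; "*" matches any run of characters
    return fnmatchcase(account, pattern)

accounts = ["1100", "1150", "1190-Cash", "2100"]
selected = [a for a in accounts if matches_like(a, "11?0*")]
```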
Math Operator
Math Operators (+, -, x, /)—If a math operator is selected, then the new logic record has an
amount equal to the original amount calculated with the specified Value/Expression. For
example, when the operator "x" is selected and 2 is entered in the Value/Expression field,
then the new record has an amount two times the original amount.
Use a numeric operator to perform simple mathematical calculations:
• NA (no operator)
• + (addition)
• - (subtraction)
• X (multiplication)
• / (division)
• Exp (expression operators)
• Function—see Function
In this example, one logic account is created because one Entity had a row meeting the
account criteria.
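The numeric operators above can be sketched as a simple dispatch on the selected operator and the Value/Expression entry (illustration only; Exp and Function run custom logic instead of simple arithmetic):

```python
def derive_amount(original, operator, value):
    """Derive the new logic record's amount from the original amount.

    Mirrors the operator list above; "NA" leaves the amount unchanged.
    Sketch only -- the Exp and Function operators execute custom logic.
    """
    ops = {
        "NA": lambda a, v: a,
        "+": lambda a, v: a + v,
        "-": lambda a, v: a - v,
        "x": lambda a, v: a * v,   # "x" with a value of 2 doubles the amount
        "/": lambda a, v: a / v,
    }
    return ops[operator](original, value)
```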
Exp
Use Expression operators to execute custom logic expressions, which are defined in the
Value/Expression field. Logic expressions, which cannot use variables or If statements, are
simpler than logic functions. Except for |CURVAL|, expressions do not have built-in
parameters. For expressions, you do not need to assign a value to RESULT.
Expressions execute faster than logic functions. You can use the Data Management
Lookup function within expressions, as it is used within logic functions. To write a
custom expression, double-click the Value/Exp field to open the expression editor.
|CURVAL| + |810| + |238|
The function above uses the Data Management Lookup function to add two source
accounts to the value of the logic account. Notice that the CURVAL parameter can be
used within expressions, as it can within logic functions, except that, with expressions,
CURVAL must be enclosed in pipes.
(|CURVAL| + |000,10,09/30/01,810|) * 100
The function above uses the Data Management Lookup function to add a source
account (810) and a source account from a specified center, Data Management
category, and Data Management period to the value of the logic account, and then
multiplies the resulting sum by 100.
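Conceptually, the two expressions above resolve each pipe-enclosed token to an amount and then evaluate the remaining arithmetic. A sketch of that resolution, with hypothetical looked-up amounts (Data Management performs this internally; the sample values are invented for illustration):

```python
import re

def eval_expression(expr, lookup):
    """Resolve |...| tokens to amounts, then evaluate the arithmetic.

    `lookup` maps each pipe-enclosed key (including CURVAL) to an amount.
    Illustrative only; eval() stands in for the real expression engine.
    """
    resolved = re.sub(r"\|([^|]*)\|", lambda m: str(lookup[m.group(1)]), expr)
    return eval(resolved)  # purely arithmetic after substitution

# Hypothetical amounts for the two examples shown above
amounts = {"CURVAL": 50, "810": 100, "238": 25, "000,10,09/30/01,810": 10}
total = eval_expression("|CURVAL| + |810| + |238|", amounts)
scaled = eval_expression("(|CURVAL| + |000,10,09/30/01,810|) * 100", amounts)
```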
Function
Use function operators to execute a custom logic function defined in the Value/
Expression field.
To write a function, select Function from the Operator drop-down list in the Logic Item
line, and then click the edit icon to open the edit window. Logic functions are usually
used for conditional mapping and other complex operations that involve multiple
source accounts. Logic functions enable the use of Jython commands including
variables, if/elif/else statements, numeric functions, and other Jython constructs.
The logic function enables the use of predefined function parameters, and also
requires that you assign a value to the RESULT variable so that a value can be
updated for the newly created logic account. The following function parameters can be
used in a logic function, and these do not require using the "|" notation:
You can define function parameters in uppercase, lowercase, or mixed case letters.
However, the keyword RESULT must be in uppercase letters.
if CURVAL > 0:
    RESULT = CURVAL
else:
    RESULT = "Skip"
Note:
You must use the Jython notation and indentation for the logic function.
The following function only assigns the result of the logic account calculation to the logic
account when "10" is the active Data Management category key.
if StrCatKey == "10":
    RESULT = CURVAL
else:
    RESULT = "Skip"
This function assigns the result of the logic account calculation to the logic account
only when the Criteria Account Entity is "000."
if StrCenter == "000":
    RESULT = CURVAL
else:
    RESULT = "Skip"
This function uses the Data Management Lookup function to add a source account
(810) to the value of the logic account if the current Data Management period is "Dec
2013".
if StrPerKey == "12/31/2013":
    RESULT = CURVAL + |810|
else:
    RESULT = "Skip"
This function uses the Data Management Lookup function to add another source
account from a different Entity, Data Management category, and Data Management
period to the value of the logic account when the active location is "Texas".
if StrLocation == "Texas":
    RESULT = CURVAL + |000,10,09/30/01,810|
else:
    RESULT = "Skip"
Value/Expression
To perform calculations and thereby derive values for a logic account, select an operator
from the Operator field to work with the Value/Expression value.
Seq
This field specifies the order in which the logic accounts are processed. Order specification
enables one logic account to be used by another logic account, provided that the dependent
account is processed first.
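The ordering rule above can be sketched as a sort on the Seq field, so that any account a later item depends on has already been processed (item names are hypothetical):

```python
logic_items = [
    {"item": "L-NetCash", "seq": 2},   # depends on L-Cash, so it runs later
    {"item": "L-Cash",    "seq": 1},
    {"item": "L-Ratio",   "seq": 3},
]

# Items are processed in ascending Seq order, so an account with a lower
# sequence is available to any later account that references it.
processing_order = [i["item"] for i in sorted(logic_items, key=lambda i: i["seq"])]
```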
Export
A Yes-No switch determines whether a logic account is considered an export account and
therefore is subjected to the conversion table validation process. If the switch is set to Yes,
then you must map the logic account.
Criteria Value
To enter criteria for each dimension, click the Criteria Value icon to open the criteria form. The
logic item is created only from the source line items that meet the specified criteria for each
dimension. Descriptions of each complex logic criteria field are as follows:
Dimension
This field enables the selection of any enabled source dimension. You can select each
dimension only once.
Criteria Type
This field works in conjunction with the Source Dimension and Criteria Value fields to
determine from which source values the logic items are derived. Criteria types
available are In, Between, and Like. The Criteria Type determines how the criteria
value is interpreted.
Criteria Value
The criteria type uses this field to determine the members to include in the logic
calculation for any given logic dimension.
Group By
When viewing the derived logic item in the Workbench, the Group By field enables the
logic item to override the displayed member in the appropriate dimensions field. This
enables you to group the dimension based on the value entered in the Group By field.
Use this field to hard code the returned member, or append hard-coded values to the
original members by entering a hard-coded member and an asterisk (*) in the Group
By field.
For example, by placing the word "Cash" in the row with account selected for
dimension, the Import form displays "Cash" in the Account field for the logic item. If
you place "L-*" in the Group By field, the import form displays "L-1100" where 1100 is
the original account that passed the logic criteria.
If you enter no value in the Group By field, no grouping occurs for this dimension, and
a separate logic item is created for each unique dimension member.
Group Level
When viewing the logic item in the Workbench, the Group Level field works with the
Group By field to override the displayed member in the appropriate dimensions field.
This field accepts only numeric values.
When you enter a value of 3 in the Group Level field, the left three characters of the
Group By field are returned. If no value is entered in the Group By field, then when you
specify 3 in the Group Level field, first three characters of the original source
dimension member are returned. The logic items displayed on the Import form can be
grouped to the desired level.
For example, when you enter L-* in the Group By field, the logic item displays in the
Import form as "L-1100", where 1100 is the original account that passed the logic criteria.
If you enter a Group Level of 4 for this row, the Import form displays "L-11". If you enter a
Group Level of 1 for this row, the Import form displays "L-1".
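The Group By and Group Level rules above can be sketched together: "*" in Group By appends the original member to the hard-coded text, and a positive Group Level keeps only the left-most characters of the result (a sketch under those assumptions, not the exact internal behavior):

```python
def group_member(original, group_by="", group_level=0):
    """Derive the member shown in the Workbench from Group By / Group Level.

    "*" in Group By appends the original member to the hard-coded prefix;
    a positive Group Level keeps the left-most N characters of the result;
    0 means no truncation. Illustrative sketch only.
    """
    member = group_by.replace("*", original) if group_by else original
    return member[:group_level] if group_level > 0 else member
```

For example, `group_member("1100", "L-*")` yields "L-1100", and `group_member("1100", group_level=3)` keeps the first three characters of the original member.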
Include Calc
The Include Calc field enables the logic item to include previously calculated Data
Management values in its calculations, provided those values meet the logic item criteria.
Note:
Each logic item has a sequence attached, and the logic items are calculated in this
sequence. If the second, or later, logic item has this field enabled, then any
previously calculated logic items are included, provided they meet the logic criteria.
The first row specifies that any accounts that begin with "11" are included in the calculated
result for "Calc Item: CashTx".
The second row further qualifies the results by specifying that the source record must also
have the entity like "TX."
The third line reduces the results to only those source records that have an ICP value
between 00 and 09.
The last line reduces the results to only those source records that have a Custom 1 (UD1) of
either 00, 01, or 02. Imported lines that do not meet the listed criteria are excluded from the
calculated results.
In the following table, only one new logic item is derived from multiple source records. Using
the preceding graphic example as the logic criteria, and the first grid that follows as the
source line items, you can see how Data Management derives the value of a single logic
item. Note the Group By field. Each Group By field includes a hard-coded value. Therefore,
for every line that passes the specified criteria, the original imported member is replaced with
the member listed in the Group By field.
Data Management groups and summarizes the rows that include identical member
combinations and thus creates the following result:
Final Result
The first row in the preceding table specifies accounts that begin with "11" are to be
included in the calculated result for "Calc Item: CashTx".
The second row further qualifies the results by specifying that the source record must
also have the entity like "TX".
The third line reduces the results to only those source records that have an ICP value
between 000 and 100.
The last line reduces the results to only those source records that have a Custom 1
(UD1) of either "00", "01", or "02". Any imported line that does not meet all listed
criteria is excluded from the calculated results.
In the following tables, two logic items are derived from the source records because of
the values entered in the Group By and Group Level fields. Two of the Group By fields
have hard-coded values listed and two have an asterisk. Therefore, for every line that
passes the specified criteria, the original imported members for the Account and Entity
dimensions are replaced with the member listed in the Group By field. The other
dimensions return all, or part of the original members based on the Group Level
entered.
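The grouping and summarization described above amounts to summing amounts over identical member combinations. A minimal sketch (the rows and member names are hypothetical):

```python
from collections import defaultdict

def summarize(rows):
    """Group rows with identical member combinations and sum their amounts.

    Each row is (account, entity, icp, ud1, amount); a sketch of the
    summarization Data Management applies to derived logic items.
    """
    totals = defaultdict(float)
    for *members, amount in rows:
        totals[tuple(members)] += amount
    return dict(totals)

rows = [
    ("CashTx", "TX", "00", "01", 150.0),
    ("CashTx", "TX", "00", "01", 50.0),   # identical combination: summed
    ("CashTx", "TX", "05", "02", 25.0),   # different ICP/UD1: separate row
]
result = summarize(rows)
```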
Logic Members
Data Management groups and summarizes the rows that include identical member
combinations and thus creates the following result.
Final Result
Check Rules
Use check rules to enforce data integrity.
Data Management analyzes the check report and inserts a status entry in the process
monitoring table. The location associated with the report shows a status of True only
when all rules within the check report pass. For rules used only for warning, no rule
logic is assigned.
Check reports run as data is loaded. You can also run the reports manually.
Note:
Check rules are not applicable when loading to Accounts Reconciliation
Manager.
Note:
If the Entity dimension has shared hierarchies, then members must be
specified in parent.child format in the check entity group or data load
mappings for check rules to work with Financial Consolidation and Close and
Tax Reporting.
Note:
If you use the Simple Workflow mode to load data (see Using Workflow
Modes) and you run a check rule with target intersections, then include a
check entity group (see Creating Check Entity Groups). Otherwise, the check
rule fails. In addition, in workflow modes other than Full, check reports are
not available after the export step has completed.
2. Optional: In Check Rules, select the POV Location, POV Period, or POV Category.
See Using the POV Bar.
3. In the Check Rule Group summary grid, select the check rule group.
4. In the Rule Item details grid, click Add.
A row is added to the grid.
5. In each field, enter check rule information:
• Display Value—See Display Value.
• Description (optional)—See Description.
• Rule Name—See Rule Name.
• Rule Text—See Rule Text.
• Category—See Category.
• Sequence—See Sequence.
• Rule Logic (optional)
6. Click Save.
Example 4-1 Display Value
The Display Value field, which controls how Data Management formats the data rows of
check reports, is used to select target accounts or report format codes. For fields that contain
report format codes, no value lookup is attempted.
Example 4-2 Browse for Target Account
This option, which displays the Search and Select: Target Value screen, enables you to
search and insert a target account (from a list of target-system application accounts) into the
check rules form.
Example 4-3 Select Format Code
This option enables you to enter format codes into the Target Account column.
Format codes determine the display of check reports.
Table 4-31 Format Codes and Corresponding Actions Performed on Check Reports
Out-of-Balance Account
Out-of-Balance Check
Rule Logic
The Rule Logic column is used to create multidimensional lookups and check rule
expressions. Rule Logic columns are processed for reports only in #ModeRule or
#ModeList mode. After a rule logic is processed for a rule in the check report, Data
Management flags the rule as passing or failing.
In this example, the check rule expression returns true (OK) when the value of Cash (a target
account) plus $1000 is greater or equals the value of AccruedTax (another target account),
and false (Error) when it does not:
|,,,YTD,<Entity Currency>,,Cash,[ICP None],[None],[None],[None],
[None],,,,,,,,,,,,,,,,|+1000>=|,,,YTD,<Entity Currency>,,AccruedTax,[ICP None],
[None],[None],[None],[None],,,,,,,,,,,,,,,,|
1. On the Setup tab, under Data Load Setup, select Check Rule Group.
2. From Check Rules, in the Check Rule Group summary grid, select a check rule
group.
3. From the Rule Item Details grid, click Add.
A row is added to the grid.
4. In each field, enter check rule information:
• Display Value—See Display Value.
• Description—(optional) See Description.
• Rule Name—See Rule Name.
• Rule Text—See Rule Text.
• Category—See Category.
• Sequence—See Sequence.
5. Click the edit icon.
The Rule Logic screen includes three tabs:
• Rule Logic Add/Edit
• Rule Logic Add/Edit as Text
• Rule Logic Test Expression
Note:
When using the equal sign for evaluating amounts, use double equal signs
(==).
4. Optional: Click the edit icon.
5. From Rule Logic in the Intersection Type field, select the intersection type for the
multidimensional lookup.
Available intersection types:
• Source intersection—Values are enclosed by the "~" character.
• Converted source intersection—Values are enclosed by the ' character.
• Target intersection—Values are enclosed by the "|" character.
See Multidimensional Lookup.
6. From Dimension, select the dimension from which to retrieve values.
7. From Member Value, select a value from the dimension.
8. Click Add to Intersection.
The member value is added to the Display area.
9. Click OK.
Display the Rule Logic Intersection screen by clicking the edit icon from the Condition
Summary or Display summary grid on the Rule Logic Add/Edit screen.
The Rule Logic Intersection screen enables you to select the type of retrieval format
for the target dimensions.
Data Management uses the intersection type when multidimensional lookups are
selected for a rules logic statement. The multidimensional lookup retrieves account
values from the target system, Data Management source data, target data or Data
Management source converted data. See Multidimensional Lookup.
Multidimensional Lookup
The multidimensional lookup retrieves account values from the target system, Data
Management source data, or Data Management converted data. You can use
multidimensional lookups in the rule condition and in the display of the rule logic.
Rule Data Sources
Data Management can retrieve data from three sources:
• Target-system data
• Data Management source data
• Data Management converted data
Target System Data
The following format, which begins and ends the rule with the pipe character (|), enables Data
Management to retrieve target-system values for any dimension.
Unless otherwise specified, parameters are optional.
The following examples illustrate ways that target-system values can be retrieved. In each
example, Balance is a target account. For dimensions that are not referenced, you must use
commas as placeholders.
Note the following:
• The Year dimension defaults to the year set in the POV.
• The Currency dimension defaults to 0.
• The View dimension defaults to YTD.
• The Value dimension defaults to <Entity Currency>.
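The comma-as-placeholder convention above can be sketched as filling a fixed-length field list, where every unreferenced dimension stays empty so the commas line up. The field count and positions used here are assumptions for illustration, not the documented layout:

```python
def lookup_token(values, field_count=28):
    """Build a pipe-delimited target lookup with comma placeholders.

    `values` maps a field position (0-based) to a dimension value; every
    unreferenced position stays empty. The field count and positions are
    assumptions for illustration only.
    """
    fields = ["" for _ in range(field_count)]
    for pos, val in values.items():
        fields[pos] = val
    return "|" + ",".join(fields) + "|"

# Scenario, period, year, view and account populated; everything else defaulted
token = lookup_token({0: "Actual", 1: "March", 2: "2002", 3: "YTD", 5: "Balance"})
```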
Example 1
Look up the value of Balance for the target period and scenario (category) set in the POV and
for each entity of the Data Management check entity group that is assigned to the location.
The example rule passes the check when the target account is less than $10 and greater
than -10.
Example 2
Look up the value of Balance for the specified dimensions.
|Actual,March,2002,YTD,Ohio,Balance,Michigan,Engines,Ford,Trucks,
[None],,,,,,,,,,,,,,,,,USD| > 0
Example 3
Look up the value of Balance for the specified dimensions and the previous period.
|Actual,-1,2002,YTD,Ohio,Balance,Michigan,Engines,Ford,Trucks,
[None],,,,,,,,,,,,,,,,,USD| > 0
Example 4
Look up the value of Balance for the target scenario (category) set in the Data
Management POV, the previous target period, and each entity of the Data
Management check entity group that is assigned to the location.
Example 1
The following shows how to use +n and -n to specify a relative offset in the check rule
when the current year dimension is "2015":
Example 2
The following shows how to use +n and -n to specify a relative offset in the check rule
when the current period dimension is "January":
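A sketch of how such relative offsets resolve against the POV period, assuming a January-to-December calendar (illustration only; the returned year offset indicates when the offset crosses a year boundary):

```python
MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def resolve_period(current_period, offset):
    """Resolve a +n/-n period offset, rolling the year when needed.

    Sketch only: assumes a January-December fiscal calendar.
    Returns (period, year_offset).
    """
    idx = MONTHS.index(current_period) + offset
    return MONTHS[idx % 12], idx // 12

# With the current period "January", an offset of -1 rolls back to December
period, year_shift = resolve_period("January", -1)
```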
`FDMEE Category, FDMEE Period, Year (Field Not Applicable), FDMEE View,
FDMEE Location, Entity(Required), Account(Required), ICP, Custom1, Custom2,
Custom3, Custom4, Custom5, Custom6, Custom7, Custom8, Custom9, Custom10,
Custom11, Custom12, Custom13, Custom14, Custom15, Custom16, Custom17,
Custom18, Custom19, Custom20`
Math Operators
Math Operators (+, -, *, /)—If you select a math operator, then the check rule has an amount
that equals the original amount calculated with the specified expression. For example, when
you select the operator "*" and enter 2 in the rule field, then the new record has an amount
two times the original amount. The math operators available in expressions:
• + (addition)
• - (subtraction)
• * (multiplication)
• / (division)
• abs ()
If/Then/Else
Check rules accept If/Then/Else statements that enable you to create more complex
conditional tests on the Add/Edit as Text tab. This statement provides a primary path of
execution when the if statement evaluates to "true," and a secondary path of execution
when the if statement evaluates to "false."
Using the If/Then/Else statement, you can use custom-field values within reports as warning
messages and flags.
In the following example, when the Result is between 100 to 1500, the Check Report with
Warning prints "Amount between 100 and 1500." The example references three data
accounts:
1. 24000050: 1000
2. 24000055: 500
3. 24000060: 10
The calculation for this example is 1000 + 500/10, which results in 1050.
The script is written using Jython code:
def runVal():
    dbVal=abs((|,,,,,BERLIN,24000050,[ICP None],[None],[None],[None],[None],,,,,,,,,,,,,,,,|)+(|,,,,,BERLIN,24000055,[ICP None],[None],[None],[None],[None],,,,,,,,,,,,,,,,|)/(|,,,,,BERLIN,24000060,[ICP None],[None],[None],[None],[None],,,,,,,,,,,,,,,,|))
    PstrCheckMessage1=''
    msg2=''
    msg3=''
    if(dbVal<100):
        RESULT=True
    elif(dbVal>=100 and dbVal<=1500):
        RESULT=True
        PstrCheckMessage1='Amount between 100 and 1500.'
    else:
        RESULT=False
    return [RESULT,PstrCheckMessage1,msg2,msg3]
Note:
You must include three message parameters with the return statement to write data
to the status table. Even if you write only a single message, the other two message
parameters are still required.
The result of running this script is shown in the Check Report with Warnings:
1. On the Rule Logic Editor, select the Rule Logic Add/Edit as Text tab.
2. In Rule, enter the rule.
Do not use a semicolon (;) in check rules. The semicolon is reserved as the
separator between the rule value and the display value.
When using the equal sign for evaluating amounts, use double equal signs (==)
instead of the single equal sign (=). For example, use a - b == 0, not a - b = 0.
3. Click OK.
Note:
The POV can only be set after data has been exported to the application for a
specific POV. Then you can enter the POV and run the rule being tested.
The POV entered remains set for the current session. You can navigate
to the Workbench and return without having to reset the POV.
• Test Condition and Test Display—Buttons that are used to run, respectively, the
expression in the Condition or Display area on the Rule tab
[DimensionMember].[Ancestors...].[DuplicateMember]
For example:
[Market].[East].[State].[New York]
[Market].[East].[City].[New York]
• Entity—Specify the target entity to consolidate and display in the check report. If the Consolidate option is selected, the entity is consolidated before it is displayed in the check report.
• Consolidate—Select to consolidate an entity before displaying it in the check report. Data Management also runs a consolidation after loading to the target system (assuming a check entity group is assigned to the location). The consolidated entities are specified in the check entity group that is assigned to the active location.
Planning—Runs the default calculation specified in the Calc Script Name.
Essbase—Runs the default calculation specified in the Calc Script Name, depending on the "Check Entity Calculation Method" property of the target application. (Note that a calculation script can be run before or after a check rule is run.)
• On Report—The option selected in the On Report column determines whether an entity is displayed in the check report. If On Report is not selected and Consolidate is selected, the entity is consolidated but not displayed.
• Sequence—Specify the order in which entities are consolidated and displayed in the check report. It is good practice to increment the sequence number by 10 to provide a range for the insertion of entities.
5
Batch Processing
Using the Data Management batch processing feature, you can:
• Combine one or more load rules in a batch and execute it at one time.
• Run jobs in a batch in serial or parallel mode.
• Define the parameters of the batch.
• Derive the period parameters based on POV settings.
• Create a "master" batch that includes multiple batches with different parameters.
For example, you can have one batch for data rules run in serial mode, and a second
batch for data rules run in parallel mode.
• Associate a batch with a batch group for ease of use and security.
• Instruct the batch to submit included jobs in parallel mode and return control.
• Instruct the batch to submit included jobs in parallel mode and return control only when
all jobs are complete.
• When working with metadata batch definitions, you can create a batch that includes data
load rules from different target applications. This is helpful when creating a batch from
which to load data and metadata. (Metadata in this case, is loaded from a flat file.)
You can also create a batch of batches or "master" batch with one batch for metadata
and another batch for data. With this method, you do not need to select a target
application name, but note that you cannot migrate a master batch without one.
For more information, see Working with Metadata Batch Definitions.
Batch processing options are available on the Data Management task pane, or by executing
batch scripts.
If you process batches from the Data Management task pane, use the Batch Definition option
to create a batch, and specify its parameters and tasks included in the batch. See Working
with Batch Definitions. Use the Batch Execution option to execute batches. See Executing
Batches.
5-1
Chapter 5
Working with Batch Definitions
Note:
Only an administrator can create batch definitions.
You can create a batch definition that includes data load rules from different target
applications. This enables you to use a batch that loads both metadata and data, or to
create a batch of batches with one batch for metadata and another batch for data.
If you want to work with data load rules that have been associated with a metadata
application, Data Management supports the loading of metadata from a flat file. For
more information, see Working with Metadata Batch Definitions.
The Batch Definition features consist of three regions:
• Batch Definition detail—Enables you to add and delete a batch definition. If adding
or modifying a definition, specify the definition name, target application, process
method, return control method, and wait parameters.
• Batch Definition parameters—Enables you to derive period parameters based on
the Import to Source, Export to Target, and POV period, and to indicate data extract
parameters. The parameter definition is unavailable for the batch type "batch."
• Batch Definition jobs—Enables you to add and delete jobs in a batch. Based on
the type of batch, specific types of rules are allowed.
To add a batch definition:
1. On the Setup tab, under Batch, select Batch Definition.
2. In the Batch Definition summary section, click Add.
Use the blank Name and Target Application fields in the Batch Definition summary
grid to enter a batch name or target application on which to search.
3. In Batch Definition detail section, select the Definition tab.
4. In Name, specify the name of the batch definition.
The name must contain only alphanumeric or underscore characters. Do not
enter spaces or any other characters.
5. From Target Application, select the name of the target application.
6. From Type, select the type of rule for the definition.
Available types are:
• data
• batch
• open batch
• open batch multi-period—for file-based data sources that include starting and
ending periods.
If you are including multiple target applications, make sure the "type" of rule is
consistent by type. For example, a batch of type "batch" cannot include a data
rule. It can include only batches. A batch of type "data" cannot include batches.
The Open Batch type is used only for file-based data sources and does not
contain any batch jobs. When you run this type of batch, the process reads the
files automatically from the openbatch directory and imports them into the appropriate
POV based on the file name. When the open batch is run, the master folder is emptied.
7. From Execution Mode, select the batch process method.
• Serial—Processes files sequentially, requiring that one file complete its process
before the next file starts its process.
• Parallel—Processes files simultaneously.
Note:
Files are not grouped by location in parallel mode.
8. For batch processing run in parallel mode, complete the following fields:
• Wait for Completion—Select Wait to return control only when the batch has finished
processing.
Select No Wait to run the batch in the background. In this case, control is returned
immediately.
• Timeout—Specify the maximum time the job can run. Data Management waits for
the job to complete before returning control.
The Timeout can be in seconds or minutes. Enter a number followed by an S for
seconds or an M for minutes.
9. In Open Batch Directory for an open batch type, specify the folder under
Home\inbox\batches\openbatch where the files to be imported are copied. If this field is
blank or null, all files under Home\inbox\batches\openbatch are processed.
10. In File Name Separator for an open batch, select the character to use when separating
the five segments of an open batch file name.
Options:
• ~
• @
• ;
• _
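Parsing an open batch file name then reduces to splitting on the chosen separator. A sketch with the "~" separator; the segment meanings (file ID, location, category, period, load method) and the sample name are assumptions for illustration, since this section only states that the name carries five separator-delimited segments that drive the POV:

```python
import os

def parse_open_batch_name(filename, separator="~"):
    """Split an open batch file name into its five segments.

    The segment meanings used here (file ID, location, category, period,
    load method) are assumptions for illustration only.
    """
    stem, _ext = os.path.splitext(filename)
    parts = stem.split(separator)
    if len(parts) != 5:
        raise ValueError(f"Expected 5 segments, got {len(parts)}: {filename}")
    keys = ("file_id", "location", "category", "period", "load_method")
    return dict(zip(keys, parts))

info = parse_open_batch_name("001~Texas~Actual~Jan-2024~RR.txt")
```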
11. Select Auto Create Data Rule to create the data rule automatically for file-based data
loads.
Note:
The Auto Create Data Rule option is available when the rule type is "open
batch".
When Data Management assigns the data rule name, it checks whether a data rule with
the name "Location_Category" exists. If this name does not exist, Data Management
creates the data rule using the following file naming conventions:
• Rule Name—Location_Category
• Description—"Auto created data rule"
• Category—Category
• File Name—Null
• Mode—Replace
12. Optional: In the Description field, enter a description of the batch definition.
14. Optional: In Batch Group, select the batch group to associate with the batch.
If the Start Period and End Period fields are selected, the POV Period field is disabled.
This field is only available for a data load batch.
7. In the Import Mode drop-down, select the mode to extract data all at once for an entire
period or incrementally during the period.
Options are:
• Append—Existing rows for the POV remain the same, but new rows are appended to
the POV. For example, a first-time load has 100 rows and a second load has 50 rows.
In this case, Data Management appends the 50 rows. After this load, the row total for
the POV is 150.
• Replace—Replaces the rows in the POV with the rows in the load file (that is,
replaces the rows in TDATASEG). For example, a first-time load has 100 rows, and a
second load has 70 rows. In this case, Data Management first removes the 100 rows
and loads the 70 rows to TDATASEG. After this load, the row total for the POV is 70.
For a Planning application, Replace Data clears data for the Year, Period, Scenario,
Version, and Entity dimensions that you are loading, and then loads the data from the
source or file. Note when you have a year of data in your Planning application but are
only loading a single month, this option clears the entire year before performing the
load.
This field is only available for a data load batch.
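The row-count examples in the two import modes above can be sketched as:

```python
def load_rows(existing, incoming, import_mode):
    """Return the POV row count after a load, per import mode.

    Sketch of the examples above: Append keeps existing rows and adds the
    new ones; Replace removes the existing rows first.
    """
    if import_mode == "Append":
        return existing + incoming        # 100 existing + 50 new = 150
    if import_mode == "Replace":
        return incoming                   # 100 removed, 70 loaded = 70
    raise ValueError(f"Unknown import mode: {import_mode}")
```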
8. Select Extract Exchange Rate to extract the exchange rate. (This option is not
applicable for file-based source systems).
9. In the Export Mode drop-down, select the mode of exporting data.
For Planning applications, the options are:
• Store Data—Inserts the value from the source or file into the target application,
replacing any value that currently exists.
• Replace Data—Clears data for the Year, Period, Scenario, Version, and Entity dimensions that you are loading, and then loads the data from the source or file. Note that when you have a year of data in your Planning application but are only loading a single month, this option clears the entire year before performing the load.
• Add Data—Adds the value from the source or file to the value in the target
application. For example, when you have 100 in the source, and 200 in the target,
then the result is 300.
• Subtract Data—Subtracts the value in the source or file from the value in the target
application. For example, when you have 300 in the target, and 100 in the source,
then the result is 200.
This field is only available for a data load batch.
For Financial Consolidation and Close, the modes of exporting data are:
• Replace—First deletes all values based on the scenario, year, period, entity, and data
source before it submits the load.
• Merge—If data already exists in the application, the system adds the values from the load file to the existing data; no existing data is deleted. If data does not exist, the new data is created.
• Accumulate—Accumulates the data in the application with the data in the load file. For each unique point of view in the data file, the value from the load file is added to the value in the application.
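The numeric export modes above can be summarized in a short sketch (an illustration of the examples in the text; the function is hypothetical, not an Oracle API):

```python
# Illustration only (the function is hypothetical, not an Oracle API) of
# how the numeric Export Mode options combine a source value with the
# value already stored in the target application.

def export_value(source, target, mode):
    if mode == "Store Data":
        return source           # source replaces the target value
    if mode == "Add Data":
        return target + source  # 100 in source + 200 in target -> 300
    if mode == "Subtract Data":
        return target - source  # 300 in target - 100 in source -> 200
    raise ValueError("Unknown export mode: %s" % mode)

print(export_value(100, 200, "Add Data"))       # 300
print(export_value(100, 300, "Subtract Data"))  # 200
```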
10. For Essbase or Planning, from the Plan Type drop-down, select the plan type of
the application.
11. Click Save.
Using Open Batches
The high-level process overview of the Open Batches function consists of:
1. In Batch Definition, add a new batch definition with the type of Open Batch.
2. Create an openbatch folder in the application inbox\batches subdirectory where the files
to be imported are copied.
After a batch is processed, a directory is created and all files within the OpenBatch
directory are moved into it. The new directory is assigned a unique batch ID.
3. Select the File Name Separator character.
This character is used to separate the five segments of an open batch file name.
4. Select the Auto Create Data Rule option.
5. Stage the open batch files by copying files to the inbox\batches\openbatch folder using
the name format for batch files.
6. In Batch Execution, process the batch.
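The staging-and-move behavior in steps 2 and 5 above can be simulated as follows (a hypothetical sketch: the directory layout follows the text, but the batch-ID naming used here is an assumption for illustration):

```python
# Hypothetical simulation of the open-batch lifecycle described above.
# The directory layout mirrors inbox/batches/openbatch from the text;
# the batch-ID naming (a UUID here) is an assumption for illustration.
import os
import shutil
import tempfile
import uuid

root = tempfile.mkdtemp()
openbatch = os.path.join(root, "inbox", "batches", "openbatch")
os.makedirs(openbatch)

# Step 5: stage an open batch file using the batch file name format.
open(os.path.join(openbatch, "a_Texas_Actual04_Jan-2004_RR.txt"), "w").close()

# After processing, a new directory with a unique batch ID is created and
# all files in the openbatch folder are moved into it.
batch_dir = os.path.join(root, "inbox", "batches", uuid.uuid4().hex)
os.makedirs(batch_dir)
for name in os.listdir(openbatch):
    shutil.move(os.path.join(openbatch, name), batch_dir)

print(os.listdir(openbatch))  # [] -- the openbatch folder is empty again
```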
When Data Management assigns the data rule name, it checks whether a data
rule with the "Location_Category" name exists. If the name does not exist, Data
Management creates the data rule.
To use predefined data rules that load data based on specific categories, leave
this field blank.
11. Optional: In the Description field, enter a description of the batch definition.
16. In the Batch Execution summary area, select the open batch file, and then click
Execute.
After an open batch is processed, a directory is created, all files within the openbatch
directory are moved into the new directory, and the new directory is assigned a unique
batch ID.
Note:
The Open batch feature is unavailable for the Account Reconciliation Manager.
and
b_TexasDR1_Jan-2004_Jun-2004_RR.txt (Data Rule, Start Period, End Period)
11. Optional: In the Description field, enter a description of the batch definition.
13. Stage the file-based data source files by copying them to inbox\batches\openbatch
using one of the following methods:
• Predefined Data Load Rule—To use a predefined data rule that loads data based on
specific categories, leave the Auto Create Data Rule field blank on the Batch
Definition screen and create the data load rule (see Defining Data Load Rules to
Extract Data).
If you must load to noncontiguous periods in the open batch, create the data rule in
which the source period mappings are defined, and use this option.
Next, create the open batch file name using the following format:
FileID_RuleName_Period_LoadMethod. The file ID is a free-form field that you can use to control the load order; batch files load in alphabetic order by file name.
The load method is defined using a two-character code, where the first character represents the append or replace method for the source load, and the second character represents the accumulate or replace method for the target load.
For the source load method, available values are:
– A—Append
– R—Replace
For the target load method, available values are:
– A—Accumulate
– R—Replace
Examples of an open batch file name are: a_Texas_Actual04_Jan-2004_RR.txt and
b_Texas_Actual04_Jan-2004_RR.txt
• Auto-Created Data Load Rule—To load data to any location category and have Data
Management create the data load rule automatically, create the open batch file name
using the format: "FileID_Location_Category_Period_LoadMethod".
In this case, Data Management looks for the data rule with the name
"Location_Category". If it does not exist, Data Management creates the data rule
automatically with the name "Location_Category".
An auto-create data rule is only applicable for contiguous period loads. To load to
noncontiguous periods, create the data rule in which the source period mappings are
defined.
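As a sketch of the naming convention above, the following hypothetical helper parses an open batch file name in the auto-create format FileID_Location_Category_Period_LoadMethod (this parser is illustrative only and not part of Data Management):

```python
# Hypothetical parser (not part of Data Management) for open batch file
# names in the auto-create format FileID_Location_Category_Period_LoadMethod.
SOURCE_CODES = {"A": "Append", "R": "Replace"}      # source load method
TARGET_CODES = {"A": "Accumulate", "R": "Replace"}  # target load method

def parse_open_batch_name(filename):
    stem = filename.rsplit(".", 1)[0]
    file_id, location, category, period, method = stem.split("_")
    return {
        "file_id": file_id,                    # free-form; controls load order
        "rule": location + "_" + category,     # auto-created data rule name
        "period": period,
        "source_load": SOURCE_CODES[method[0]],
        "target_load": TARGET_CODES[method[1]],
    }

info = parse_open_batch_name("a_Texas_Actual04_Jan-2004_RR.txt")
print(info["rule"], info["source_load"], info["target_load"])
# prints: Texas_Actual04 Replace Replace
```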
14. Optional: Apply any scheduling conditions to the open batch file.
16. In the Batch Execution summary area, select an open batch file, and then click
Execute.
After an open batch is processed, a directory is created and all files within the openbatch
directory are moved to it. The new directory is assigned a unique batch ID.
Note:
The Open batch feature is unavailable for the Account Reconciliation
Manager.
Executing Batches
Use the Batch Execution feature to view all batches to which you have access, based on the batch group assigned. You can also use Batch Execution to select a batch and execute a rule after the parameters passed with the rule have been validated.
Note:
The Batch Execution option is only accessible to a user with a Run
Integration role.
To execute a rule:
1. On the Workflow tab, under Other, select Batch Execution.
2. In the Batch Execution summary area, select a batch name, and then click
Execute.
3. Optional: You can also schedule a job by clicking Schedule (see Scheduling
Jobs). You can check the status of the batch by clicking Check Status (see
Viewing Process Details).
Scheduling Jobs
The scheduling jobs feature provides a method to orchestrate the execution times of
data load rules.
Note:
When you cancel a job from the Data Management user interface using
Cancel Schedule, all instances of a schedule for a rule are cancelled. You
cannot selectively cancel individual schedules for a rule.
To schedule a job:
1. From the Batch Execution screen or Data Load Rule screen, select the batch name, and then click Schedule.
2. In Schedule, select any rule feature specific options.
For example, if you select the Schedule option from the Data Load Rule screen, specify
the Import from Source, Recalculate, Export to Target, and so on options.
3. Specify the type of scheduling, and select the associated date and time parameters.
4. Click OK.
Note:
The Timezone option is unavailable for the Monthly (week day) schedule type.
6
Data Management Reports
Data Management provides prebuilt reports that capture business-critical operations and
revenue-generating activities within your organization. These reports provide key information
on how data is integrated from the Enterprise Resource Planning (ERP) source system into
the Oracle Enterprise Performance Management Cloud target application.
The Data Management reporting framework enables you to adjust report group assignments, add or remove reports from report groups, and control report security.
Note:
The Map Monitor reports do not capture historical data earlier than release 11.1.2.4.100. Map Monitor reports are enabled only if the Enable Map Audit option is set to "Yes" in System Settings.
Base Trial Balance Reports
The base Trial Balance reports represent account balance source data in a General Ledger system. You use a base Trial Balance report to validate and compare balances as data is loaded from the source system to the target applications.
The subcategories of base Trial Balance reports:
• Trial Balance Location, With Targets (Cat, Per)
• Trial Balance Current Location, With Rules (Cat, Per)
• Trial Balance Current Location, All Dimensions-Target Entity-Acct (Cat, Per)
• Trial Balance Converted Current Location, By Target Entity-Acct (Cat, Per)
• Trial Balance Current Location, with Target Entity-Acct (Cat, Per)
• Trial Balance Current Location, All Dimension-Targets (Cat, Per)
• Trial Balance Current Location, by Target Acct (Cat, Per)
Working with Report Definitions
6. Click Save.
To search on a report group, click the Search icon and choose a report group from the Search and Select: Group screen.
Report groups are created on the Report Group tab. See Adding Report Groups.
6. Click Save.
To copy a report:
1. On the Setup tab, under Reports, select Report Definition.
2. In Report Definition, in the Report summary grid, select the report.
3. In the Report summary grid, click Copy Current Report.
The copied report is added to the list of reports. The name of the report takes the original
report name appended with "_copy."
Running Reports
To run reports:
1. On the Workflow tab, under Other, select Report Execution.
2. In Report Execution, in Report Groups, select a report group.
3. In Reports, select a report.
To filter the display listing by a report name within a report group, enter the name of the
report in the blank entry line above the Name field and press Enter. For example, to view
only reports beginning with Account, enter Account and press Enter.
To filter the display listing by a base query name within a report group, enter the query
name in the blank entry line above Query.
4. Click Execute.
5. When prompted, enter parameter values on the Generate Report screen.
Data Management Detail Reports
Audit Reports
An audit report displays all transactions for all locations that compose the balance of a
target account. The data returned in this report depends on the location security
assigned to the user.
Runs for
All Data Management locations
Parameters
Target account, Period, Category
Query
Account Chase Wildcard
Template
Account Chase WildCard.rtf
Runs for
All Data Management locations
Parameters
Target account, Period, Category
Query
Account Chase Freeform
Template
Account Chase Free Form.rtf
Note:
The Map Monitor reports do not capture historical data earlier than release
11.1.2.4.100.
Map Monitor reports are enabled only if the Enable Map Audit is set to "Yes" in
System Settings.
Runs for
All Data Management locations
Parameters
Location, Start Date, End Date
Query
Dimension Map Query
Template
Dimension Map for POV.rtf
Note:
The Map Monitor reports do not capture historical data earlier than release
11.1.2.4.100.
Map Monitor reports are enabled only if the Enable Map Audit is set to "Yes"
in System Settings.
Runs for
All Data Management locations
Parameters
User name, Start Date, End Date
Query
Dimension Map for POV
Template
Dimension Map for POV.rtf
Runs for
Current Data Management location
Parameters
Period, Category
Query
Intersection Drill Down Query
Template
Intersection Drill Down.rtf
Check Reports
Check reports provide information on the issues encountered when data load rules are run.
Note that Check reports return target system values that include aggregation or calculations
from the target system.
Note the following when using check reports:
• When the check report is run and opened from the Workbench, it is not saved to the Data
Management folder on the server.
• When you run a data rule, a check rule report is not generated automatically. In this case,
run the data rule before executing the check report.
• If you run the report in offline mode, the report is saved to the outbox on the Data
Management server.
• To run a data rule and report in batch mode, run the data load rule from a BAT file, and then run the report from a BAT file. You can put both in the same BAT file, or call each of them from separate BAT files.
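The run-the-rule-then-the-report sequencing above can be sketched as follows (a hypothetical wrapper; the BAT file names in the comment are assumptions, and `check=True` stops the sequence as soon as a step fails):

```python
# Hypothetical wrapper for the batch-mode sequence described above: the
# data load rule runs first, then the check report.
import subprocess

def run_sequence(commands):
    """Run each command in order; check=True stops the sequence on failure."""
    for cmd in commands:
        subprocess.run(cmd, shell=True, check=True)

# Example invocation (BAT file names are illustrative assumptions):
# run_sequence(["rundatarule.bat", "runcheckreport.bat"])
```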
Check Report
Shows the results of the validation rules for the current location (indicates pass or fail status).
Runs for
Current Data Management location
Parameters
Period, Location and Category
Query
Check Report
Template
Check Report.rtf
Runs for
Current Data Management location
Parameters
Category, Start Period, End Period
Query
Check Report Within Period Query
Template
Check Report With Period Range.rtf
Runs for
Current Data Management location
Parameters
None
Query
Check Report With Warning
Template
Check Report With Warning.rtf
Runs for
Current Data Management location
Parameters
None
Query
Check Report By Validation Entity
Template
Check Report By Validation Entity Sequence.rtf
Runs for
Current Data Management location
Parameters
Category, Period
Query
Current Trial Balance With Location with Targets
Template
TB Location With Targets.rtf
Runs for
Current Data Management location
Parameters
Category, Period
Query
TB Location With Query
Template
TB Location with Rules.rtf
Runs for
Current Data Management location
Parameters
Category, Period
Query
Trial Balance Current Location with Targets
Template
TB/(All Dimensions with Targets) by Target Entity Account.rtf
Runs for
Current Data Management location
Parameters
Category, Period
Query
Trial Balance Location All Dimension.
Template
TB with Transaction Currency.rtf
Runs for
Current Data Management location
Parameters
Category, Period
Query
Trial Balance Current Location Sorted By Target Account
Template
TB With Target Account.rtf
Runs for
Current Data Management location
Parameters
Category, Period
Query
Trial Balance Base Transaction Currency
Template
Base Trial Balance (All Dimensions with Targets).rtf
Runs for
Current Data Management location
Parameters
Category, Period
Query
Trial Balance Converted by Target Entity/Account Query
Template
TB Converted Current Location by Target Entity Account.rtf
Listing Reports
Listing reports summarize data and settings (such as the import format, or check rule) by the
current location.
Runs for
N/A
Parameters
None
Query
Import Format By Location
Template
Import Format by Location.rtf
Location Listing
Shows a list of all mapping rules for a selected period, category, or dimension.
Runs for
Current Data Management location
Parameters
Any Data Management Dimension, Period, Category
Query
Location Listing Query
Template
Location Listing.rtf
Location Analysis
Location Analysis reports provide dimension mapping by the current location.
Runs for
Current Data Management location
Parameters
Current Data Management dimension
Query
Dimension Map
Template
Dimension Map.rtf
Runs for
Current Data Management location
Parameters
Any Data Management Dimension, Period, Category
Query
Dimension Map for POV
Template
Dimension Map.rtf
Runs for
All Data Management locations
Parameters
Category, Period
Query
Process Monitor
Template
Process Monitor.rtf
Runs for
All Data Management locations, period range
Parameters
Category, Start Period, End Period
Query
PMPeriodRange
Template
PMPeriodRange.rtf
Runs for
All Data Management categories and locations
Parameters
Period
Query
Process Monitor All Categories
Template
Process Monitor All Category.rtf
Variance Reports
The Variance reports display source and trial balance accounts for one target account,
showing data over two periods or categories.
Runs for
All Data Management locations
Parameters
Target Account, Category 1, Period 1, Category 2, Period 2
Query
Account Chase Variance
Template
Account Chase Variance.rtf
Runs for
Current Data Management location
Parameters
Category 1, Period 1, Category 2, Period 2
Query
Trial Balance Variance
Template
TB Variance.rtf
7
System Maintenance Tasks
You can run system processes to maintain and clean up runtime artifacts, such as the Process tables, Staging tables, or inbox/outbox folders. Often the tables and folders contain vast amounts of data that you may no longer need. With the System Maintenance Tasks feature, you can purge standard tables and folders by scheduling system processes and executing them.
Note:
All applications not assigned to a folder are purged when a single application is
selected for a purge. The default application folder is generic, and the purge script
focuses on the folder in which the selected application resides. In this case, if you want to prevent an application from being purged, save it to an independent folder.
To facilitate the use of the Purge Scripts, Data Management provides the following:
• A set of custom scripts is shipped to the bin/system directory.
The scripts include:
– Delete Integration (DeleteIntegration.py)
– Export Setup Table Data (TableExport.py)
– List Table Rowcount (TableRowCount.py)
– Maintain Application Folder (MaintainApplicationFolder.py)
– Maintain Data Table by Location (MaintainFDMEEDatatables.py)
– Maintain Data Table by Application (MaintainFDMEEDatatables.py)
– Maintain Process Table (MaintainProcessTables.py)
– Maintain Setup Table (MaintainSetupData.py)
– Purge All Imported Data (PurgeAllData.py)
– Upgrade Custom Applications (UpgradeCustomApp.py)
• Scripts are registered as system scripts in script registration.
• Scripts are registered as part of installation with QUERYID = 0 and APPLICATIONID = 0.
• The script group "System" is created and system scripts are assigned to it.
• Script execution displays system scripts when the user has access irrespective of the
target application in the POV.
• The ODI process executes the scripts from the bin/system directory instead of the data/scripts/custom directory.
Deleting Integration
You can delete an integration, including the name, import format, location, mappings, and any data rules created in Data Integration. This option enables you to delete an entire integration without having to delete individual components.
To delete an integration:
1. On the Workflow tab, under System Maintenance Tasks, select Delete
Integration.
2. From the LOV bar, select the location associated with the integration.
3. From the Execute Script screen, in Value, enter the name of the integration to delete, and then click OK.
Optionally, you can click Schedule to schedule the integration to be deleted. For
information on scheduling jobs, see Scheduling Jobs.
• AIF_BALANCE_RULES
• AIF_BAL_RULE_PARAMS
• AIF_LCM_ARTIFACTS_V
• AIF_SOURCE_SYSTEMS
• AIF_TARGET_APPLICATIONS
• AIF_TARGET_APPL_DIMENSIONS
• AIF_TARGET_APPL_PROPERTIES
• TPOVPARTITION
• TBHVIMPGROUP
• TBHVIMPITEMFILE
• TPOVPERIOD
• TPOVPERIODADAPTOR
• TPOVPERIODSOURCE
• TPOVCATEGORY
• TPOVCATEGORYADAPTOR
To execute the export setup table data script:
1. On the Workflow tab, under System Maintenance Tasks, select Export Setup
Table Data.
2. From the Execute Script screen, click OK.
Note:
To delete all locations for an application, use the Maintain Data Tables by
Application option. This option enables you to delete data across all locations
associated with a selected target application. See Maintain Data Table by
Application for more information.
Maintain Setup Data uses a Mode parameter, which enables you to either preview or delete
invalid data.
The parameters are:
• Location
• Start Period
• End Period
• Category
To maintain the data by location:
1. On the Workflow tab, under System Maintenance Tasks, select Maintain Data Table
by Location.
2. Click Execute.
3. From Execute Script, in Location, select the location from which to delete data.
To delete data from all locations, leave the Location field blank.
4. From Start Period, select the starting period from which to delete data.
5. From End Period, select the ending period from which to delete data.
6. From Category, select the category data to delete.
To delete all category data, leave blank.
7. Click OK.
8. Optional: Click Schedule to schedule the job.
For information on scheduling jobs, see Scheduling Jobs.
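The blank-means-all behavior of the optional parameters above can be illustrated with a hypothetical sketch that builds a deletion filter (the actual logic of MaintainFDMEEDatatables.py is not documented here, so this is an assumption for illustration only):

```python
# Hypothetical sketch only: the actual SQL used by MaintainFDMEEDatatables.py
# is not documented here. This shows how blank parameters widen the scope
# of the deletion (blank Location -> all locations, and so on).

def build_filter(location=None, start_period=None, end_period=None, category=None):
    clauses = []
    if location:
        clauses.append("LOCATION = '%s'" % location)
    if start_period and end_period:
        clauses.append("PERIOD BETWEEN '%s' AND '%s'" % (start_period, end_period))
    if category:
        clauses.append("CATEGORY = '%s'" % category)
    # With every parameter blank, no filter is applied and all data matches.
    return " AND ".join(clauses) or "1 = 1"

print(build_filter(location="Texas", category="Actual"))
# prints: LOCATION = 'Texas' AND CATEGORY = 'Actual'
```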
• AIF_PROCESS_DETAILS
• AIF_PROCESS_LOGS
• AIF_PROCESS_PARAMETERS
• AIF_PROCESS_PERIODS
• AIF_PROCESS_STEPS
• AIF_BAL_RULE_LOADS
• AIF_BAL_RULE_LOAD_PARAMS
• AIF_BATCH_JOBS
• AIF_BATCH_LOAD_AUDIT
• AIF_TEMP
It accepts the number of days to keep as a parameter.
Before using this option, reconcile the differences in the file formats. For example, the header row in the Data Export to File contains the name of the dimension, not UD1, UD2, and so on.
For more information about the Data Export to File option, see Creating a Data Export File.
To upgrade custom applications:
1. On the Workflow tab, under System Maintenance Tasks, select Upgrade Custom
Applications.
2. Click Execute.
3. From Execute Script, in Value, specify the name of the custom target application to migrate from the LOV.
To migrate all custom applications, in Value, enter All Custom Applications.
4. Optional: To browse custom target applications, click the Search icon, and from the Search Parameter Value screen, select the custom target application and click OK.
5. Click OK.
6. Optional: Click Schedule to schedule the job.
For information on scheduling jobs, see Scheduling Jobs.
Purge All Imported Data
Note:
There is no backup to recover any purged data. It is recommended that you
use extreme caution before executing this process.
Note:
Drill regions are not deleted as part of the purge process. If you need to
delete a drill region, then delete it manually.
Note:
All setup data (for example, application registration, import formats, and mappings) is retained and not impacted by the purge process.
4. From Execute Script, and then in Confirm Delete All Imported Data, select Y (yes) to
launch a confirmation message before executing a purge.
Otherwise, enter N (No).
5. In Reason, specify the reason for executing the purge.
The reason is shown on the Process Details log.
6. Click OK.
The message Custom script execution initiated with Process ID: XXX is displayed (where XXX is the process ID number generated for the job). You can access the log from the Process Details page.
7. Optional: Click Schedule to schedule the job.
For information on scheduling jobs, see Scheduling Jobs.
Lifecycle (LCM) Snapshots
snapshot then can be used to migrate artifacts, setup and data to another
environment. Additionally, Service Administrators can create full backup snapshots of
the environment or incremental backup snapshots of artifacts at any time.
See the following topics about the Snapshot process:
• Importing Snapshots
• Exporting Data to a Snapshot
• Using Lifecycle (LCM) Snapshot Modes
Importing Snapshots
A snapshot data import enables you to restore setup and historical artifacts from one
environment to another. Data Management clears the existing data in the target
environment and then imports the data from the backup files without merging any
operations.
Note:
You can import a snapshot to the same or older release. You cannot import a
snapshot that is newer than the source version.
5. Click OK.
The message Custom script execution initiated with Process ID: 0 is displayed.
You can access the job log from the Process Details page.
Note:
The Process ID for the snapshot import is always "0" when the file is exported in ALL, INCREMENTAL, or ALL_INCREMENTAL mode, because each of these three exports contains data artifacts, and process details are part of the data tables.
The Process ID is not "0" when only setup data is exported. This is because setup data doesn't contain data tables (process details). In this case, the next available sequence in the Process ID is used.
Note:
If the snapshot type is Setup, then only the setup folder is exported and
included in the ZIP file.
6. Exports data and metadata of the workflow process status for a location, category, and
period.
7. Deletes the files under an /output folder for any POVs that have been deleted after the
last export.
8. Archives the setup and data folders into a ZIP file in the outbox/<filename>.zip folder.
Note:
When the snapshot type is set to ALL_INCREMENTAL, all files are included in the /output folder inside the ZIP.
When the snapshot type is set to INCREMENTAL, only the incremental files exported in the current process are included in the /output folder inside the ZIP.
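Step 8 above, archiving the setup and data folders into outbox/&lt;filename&gt;.zip, can be sketched as follows (the folder layout and file names here are assumptions for illustration, not the actual export contents):

```python
# Hypothetical sketch of step 8: archive the setup and data folders into
# outbox/<filename>.zip. The folder layout and file names are assumptions.
import os
import tempfile
import zipfile

root = tempfile.mkdtemp()
for sub in ("setup", os.path.join("data", "output")):
    os.makedirs(os.path.join(root, sub))
    with open(os.path.join(root, sub, "export.csv"), "w") as f:
        f.write("example")

zip_path = os.path.join(root, "outbox", "snapshot.zip")
os.makedirs(os.path.dirname(zip_path))
with zipfile.ZipFile(zip_path, "w") as z:
    for folder, _, files in os.walk(root):
        for name in files:
            full = os.path.join(folder, name)
            if full != zip_path:  # do not add the archive to itself
                z.write(full, os.path.relpath(full, root))

print(sorted(zipfile.ZipFile(zip_path).namelist()))
```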
Tip:
If you specify an existing output file (ZIP) and select No to overwrite it, the job fails and the system shows the message: "Rule Execution did not complete successfully."
6. Click OK.
Only one Execute Snapshot Exports job can be executed at a time.
The message Custom script execution initiated with Process ID: XXX is displayed (where XXX is the process ID number generated for the job).
You can access the job log from the Process Details page in Data Management.
You can view the snapshot export ZIP after the snapshot export has completed by
downloading the output ZIP in Process Details.
You can also download the snapshot ZIP using the EPM Automate downloadFile
command.
7. Optional: Click Schedule to schedule the job.
Note:
You can import a snapshot to the same or older release. You cannot import a
snapshot that is newer than the source version.
A
TDATASEG Table Reference
The TDATASEG table is used to store the data loaded by the user and any transformations
between the source dimension members and results of the mapping process.
Note:
When loading text, the source value is loaded to the DATA column in TDATASEG, and the mapped result is loaded to the DATAX column.
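The DATA/DATAX pairing can be illustrated with a simplified row (a hypothetical column subset and text mapping; the real TDATASEG table has many more columns):

```python
# Simplified, hypothetical TDATASEG row: only a few columns are shown,
# and the text mapping below is invented for illustration.
row = {"ACCOUNT": "1100", "ACCOUNTX": None, "DATA": "Draft", "DATAX": None}

text_map = {"Draft": "Working"}  # hypothetical mapping of the source text
row["DATAX"] = text_map.get(row["DATA"], row["DATA"])

print(row["DATA"], "->", row["DATAX"])  # Draft -> Working
```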
Note:
When importing data from Financial Consolidation and Close, attribute columns ATTR2 and ATTR3 should not be used for any other dimension mappings.
B
Oracle HCM Cloud Extract Definition Field Reference
The following tables list the Oracle Human Capital Management Cloud fields belonging to each predefined extract definition. These fields are a subset of the data that can be extracted and loaded into an Oracle Hyperion Workforce Planning or Oracle Strategic Workforce Planning Cloud application from each extract definition.
• Account Merit Extract Definition Fields
• Assignment Extract Definition Fields
• Component Extract Definition Fields
• Employee Extract Definition Fields
• Entity Extract Definition Fields
• Job Extract Definition Fields
• Location Extract Definition Fields
• Position Extract Definition Fields