First, you need to define the structure of your file or payload, and then you can map source data values to the destination or to specific output fields.
Define Destination Fields and Their Structure
Here you will define the field structure of your destination data set.
Fields can be added all at once using the Import button or manually using the Add button. When using a Dayforce Report as the source, you can click Auto-fill from Report in Step 3 to populate the name, description, and data type of each field from the report in the same order as the report. If you add manually, a pop-up opens where you can define the appropriate field settings. See the table below for a list of the potential fields in this pop-up. Child fields can also be added by clicking the action icon on a parent field and clicking Add.
If you import, a pop-up opens, allowing you to select a file or JSON payload that mirrors the final structure that you want Integration Studio to create when the integration runs. During the import, Integration Studio reads the file structure and content and automatically populates as many settings as possible. These settings can include data type, data format, and relationships between fields (such as parent-child in XML and JSON).
When you select XML File as the output type, you also have the option to import an XSD file that defines the destination fields and their structure, reducing the time spent on manual setup and configuration. When you import an XSD file, the Required Field checkbox in the Add Element dialog box and the Include Null/Empty Values in the Output toggle (available when your integration is not configured to use the Character-Separated Values/Delimited Text File output type) are set based on the MinOccurs and Nillable fields in the XSD file:
- When MinOccurs is greater than 0 and Nillable is set to false, fields are designated as required and null or empty fields are included in the output file.
- When MinOccurs is set to 0 and Nillable is set to true, fields are not designated as required and null or empty fields are not included.
- Otherwise, fields are not marked as required, but null or empty fields are included in the output file.
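For example, a minimal XSD fragment (the element names are illustrative) that would produce a required field and an optional field under these rules:

<!-- minOccurs="1" and nillable="false": Required Field is selected, null/empty values included -->
<xs:element name="EmployeeNumber" type="xs:string" minOccurs="1" nillable="false"/>
<!-- minOccurs="0" and nillable="true": Required Field is cleared, null/empty values excluded -->
<xs:element name="MiddleName" type="xs:string" minOccurs="0" nillable="true"/>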
Note: When you are importing a sample file, Integration Studio processes the values true, false, 1, and 0 as boolean values.
When you are importing fields, Integration Studio sets the Source to <Source Vendor> Field for outbound integrations and <Vendor> Field for inbound integrations by default. Make sure to review these settings, because the default settings might not be right for every integration.
Important: When using Import for a character-separated/delimited text output, make sure the delimiter in the sample file you are importing is the same as the delimiter you configured in the Output Format in Step 2.
Each field has an action icon (three vertical dots) at the far right of the field name. To edit field settings such as Display Name, or to modify existing parent-child relationships, click the action icon, and then click Edit in the menu that opens.
Important: Fields that share a parent element that is designated as an array are repeated in the output file. For example, to return all direct deposit records, all of the destination fields that are mapped to the Direct Deposit collection in Dayforce must have the same parent, and the parent must have the Is Array box selected. Use the filtering options in the Choose Record for <Field Name> dialog box to return a single record.
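For example, a sketch (field names are hypothetical) of how a Direct Deposit parent marked Is Array repeats in JSON output, one element per source record:

"DirectDeposits": [
  { "BankNumber": "001", "DepositAmount": 500.00 },
  { "BankNumber": "002", "DepositAmount": 1250.75 }
]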
Field | Description |
---|---|
Type | Data or Parent. Required field for all output types. Data is selected by default. |
Field Name | Name of the export field as it is shown in the output. If no Display Name is entered, the Field Name is shown in Dayforce. Required field. |
Display Name | Name of the export field as it is shown in Dayforce. |
Description | Comments or notes for user reference. The contents of this field don't affect the generated file. |
Configure Expression on Parent | This checkbox, available for the Parent type, becomes active when you have added at least one child element to the parent element you created. This checkbox is available for all output types. The parent field can be edited in the Parent Expression editor, which includes the IDL Expression and IDL Schema tabs. For more information on working with expressions, see Expression. |
Data Type | The data type expected by the vendor, such as String, Number, Date/Time, or Boolean. Required field. |
Data Format | Formatting options when the Data Type is set to Number or Date/Time. Required field. |
Custom Format | Customize the data format when you select Custom from the Data Format drop-down list or String from the Data Type drop-down list. |
Required Field | True or False. If true (checkbox selected), the integration fails at runtime if this field is blank or empty. |
Use Display Name in Output | Select this checkbox to use non-unique headers in your CSV files. This checkbox is only shown when the Character-Separated Values/Delimited Text File format is selected in the Output Type field of the Output Format section in the Define the parameters of this integration step. If you select this checkbox and leave the Display Name field empty, the name entered in the Field Name field is used as the header in your output. |
Width | Define the width of the field. Required for Fixed Width Text Files and Segmented Fixed Width Files. If included for other output types, data populating the field is truncated when the source data exceeds the maximum width defined here. |
Padding Character | Select the character used for padding when the total number of characters in the source is less than the configured Width. Required for Fixed Width Text Files and Segmented Fixed Width Files. |
Custom Padding Character | Enter the custom padding character to use when padding blanks. This field is shown when you select Custom from the Padding Character drop-down list. |
Padding Alignment | Define the location of the padding, left or right, relative to the source data. |
Group Fields by Key | Group fields to apply context to data obtained from a flat data source, such as a Dayforce Report. This toggle can be used when you configure a parent array field that has at least one child under it. |
Parent | List of fields marked as Parent for Type. Required for XML and JSON. |
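For example (hypothetical values), a field with a Width of 10, a Padding Character of 0, and left padding alignment renders the source value 42 as:

0000000042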
If fields need to be rearranged, drag and drop them to the appropriate location within the same parent. You can also change the parent field of a child field, if needed. This can be done by clicking the action icon beside a field and then Edit. In the Edit <Field Name> Element dialog box that opens, select a parent field from the Parent drop-down list. You cannot move a field under a parent field that already has a child field with the same name.
Filtering
Local Filters
The Local Filters screen opens when you click the Filters button beside a parent field. In this screen, you can configure local filters and filters on a sum. In addition, you can configure a Choose Record filter on fields within an array. This ensures that you send only the appropriate information to a target system. For example, you can configure an integration to send only the data of employees who are assigned a specific value in a specific employee property, to exclude all terminated and inactive employees, to include only active employees, or to include only employees hired in the last 30 days.
Filtering for Unmapped and Required Fields
You can filter for unmapped and required fields in your integration by using the following switches in the mapping step:
- Show Only Unmapped Fields
- Show Only Required Fields
The Show Only Required Fields and the Show Only Unmapped Fields switches are disabled by default and are not shown in Integration Studio for integrations that are edited entirely in IDL.
Filtering on Parent Destination Fields
For all output types, all parent fields in the destination structure, including the root field, have a Filter button beside them. You can group criteria fields together using the AND or OR operators when you are configuring a filter for a parent field that is mapped to a specific source array. You can also apply these operators at different levels in your destination structure to customize what data is included in the output. For example, you can include or exclude an entire parent node from an integration run when the nested child fields don't contain any data by enabling or disabling the Include Null/Empty Parent in Output switch. In addition, you can configure filters that cannot be configured in the UI by using a Parent Filter Expression.
Parent fields that are arrays must be mapped to a source array before you configure a filter. Once mapped, the criteria fields and comparison values in the filter are restricted to data fields under the selected source array, including any child arrays. A parent field that is not an array can be filtered without mapping it to a source array. Field options in the Local Filters screen are similarly restricted when the root field is mapped to a source array.
Note that the Filter button isn’t shown for Parent elements that have the Configure Expression on Parent option selected.
Headers and Footers
Headers and footers are also known as leading and trailing records and are used to provide additional data. They are generally used for processing and auditing an integration in a target system, such as client ID or a count of individual records. When you configure headers or footers in your integration, the header is shown once at the top of a file and the footer is shown once at the bottom of a file.
To configure a header or a footer in an integration:
- Click Create Integration in Integrations > Integration Studio.
- Select the connector you want to use to configure your integration.
- Click Add in the mapping step.
- Select Parent from the Type drop-down list in the Add Element dialog box.
- Important: Do not select the Is Array checkbox.
- Click Save.
- Put the data fields under the parent field you created to create a header or place the parent field at the bottom to create a footer.
You can configure an integration to have multiple headers or footers, if required. For example, consider a configuration that returns two headers. Client is the first header, with the ID field nested underneath it. PayRun is the second header, with the PayDate, PPStart, and PPEnd fields nested underneath it.
In this configuration, the root, Client, and PayRun fields are not arrays, whereas the Data field is an array. As a result, the fields under the Client and PayRun parent fields are shown once at the top of the file, and the fields under the Data parent field are repeated for every record in the source.
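A sketch of the resulting file (values are hypothetical; the first line is the Client header, the second is the PayRun header, and the remaining lines repeat once per Data record):

12345
2025-01-15,2025-01-01,2025-01-14
E001,1500.00
E002,1720.50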
Output Type | Description of Limitation |
---|---|
JSON, Character-Separated Values/Delimited Text File | When the root field is an array, headers and footers are repeated for every record in the source. |
Fixed Width Text File, Segmented Fixed Width File | Each row in a file must have the same total width, including the headers and footers. |
Important: Headers and footers cannot be configured in integrations that use the Character-Separated Values/Delimited Text File output type. To use headers in a CSV file, you must configure your integration to use the Segmented Character-Separated Values/Delimited Text File output type.
Create Custom Tokens
Tokens let you use system and user-defined values in the mapping step of an integration. Currently, you can select configurable and system tokens in the mapping step for all integrations. Custom tokens, in contrast, are unique to a specific integration because they are user defined and dependent on the selected source.
Token Type | Description |
---|---|
Configurable Tokens | Pre-defined value based on the integration. For example, by default INTEGRATION_RUN_COUNT is a count of all times the integration has run since being created. It increments with each run, but users can reset the value at any time. |
System Tokens | Pre-defined system values that cannot be edited. |
Custom Tokens | User-defined IDL expressions that can be written once and mapped to any number of destination fields. Note: As of the 2025.1.0 release, you can only map custom tokens directly to a destination field. |
To create a new custom token:
- Open an existing integration.
- Click Map fields from Dayforce to <Vendor> to expand the step.
- Click Manage Tokens.
- Click Custom Tokens to expand the section.
- Click Create New Custom Token.
- Define the Token Details.
- a. Name: Enter a name for your custom token.
- b. Data Type: Select the data type for your custom token. For example, a custom token that is configured to use the Date/Time data type can only be mapped to a Date/Time destination field.
- c. Description: Enter a description for your custom token. You can use this field to add additional information about your custom token, if needed.
- Define the parameters in the Parameters section and write the required IDL expression.
- Click Save.
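For example, a minimal custom token sketch. Assuming a user-defined string parameter named company_code, the following IDL expression returns a prefixed code that could then be mapped to any String destination field:

// Falls back to "UNKNOWN" if the parameter is missing or nil
string_concat(company_code ?? "UNKNOWN", "_EXPORT")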
You can view a custom token in the Manage Integration Token screen. Additionally, you can edit and map a custom token to a destination field.
Note: You can disable or delete tokens that are not mapped. Disabled or deleted tokens are not shown as options in the mapping step. You can re-enable a token that you previously disabled at any time; however, deleted tokens cannot be recovered. Only custom tokens can be deleted.
Grouping Fields
Change the Structure of Your Data
Data sent between computer systems must be organized so that it can be processed without human intervention. For example, the GET Time Data or Quick Entry API is in the context of time entry records or quick entries, and each record in the response represents one time entry record or one quick entry. Conversely, the Employee Bulk and the Ongoing Benefit Carrier Exports are in the context of an employee, so each record in the response represents one employee.
For more information on understanding your data structure, see Working with APIs and The Importance of Understanding Your Data Structure.
Sometimes, the target system needs the data to be organized in a certain way. For example, an accounting system might require a GL Summary file organized by region, while the source data is organized by employee. When the destination structure is defined, you can enable the grouping functionality on the appropriate array parent. Depending on what information is being reported, additional configuration might be required on child fields.
To enable grouping:
- Click the action icon beside the parent field you want to enable the grouping functionality on.
- Click Edit.
- In the Edit <FieldName> Element dialog box, click the Is Array switch, if disabled.
- Click the Group Fields by Key switch.
- Click Save.
All child fields under a grouped parent and child fields of an object parent below a grouped parent are either Keys or Totals. By default, most fields are considered Keys, and are used to group the source data. Totals are used to report aggregations that can be either counts or sums that are related to the grouped data.
For example, a GL Summary file could report the Region, Address, and related FEIN as well as the headcount and a sum of debits and credits. The Region, Address, and FEIN fields are Keys, but the headcount and sums of debits and credits are Totals that are reported for each unique combination of Region, Address, and FEIN.
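A sketch of that example with hypothetical values. Source rows (one per employee):

Region | Debit |
---|---|
East | 100 |
East | 250 |
West | 75 |

Grouped output (one row per unique Region key, with Totals):

Region | Headcount | Sum of Debits |
---|---|---|
East | 2 | 350 |
West | 1 | 75 |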
By default, all child fields are marked as Keys when you enable the Group Fields by Key switch. Some of the fields that are marked as a Key might actually be a Total; the fields are displayed for your review before you apply the grouping configuration. To mark a field as a Total, click the action icon beside the field, and then select Mark as Total. Similarly, you can click Mark as Key to mark a field as a Key.
Important: Keys cannot be configured for a source that uses Aggregation. Keys that are configured for any other source must result in a single value. For example, a key configured as the concatenation of an employee's work assignment location and badge number would cause the integration to fail if multiple work assignments exist in the source and a Choose Record filter was not applied to limit the key to a specific work assignment.
Your integration can also fail or generate unexpected results when the Keys and Totals are not properly assigned.
Source and Source Details
Field Configuration
Each field must have a source, which defines where to get the data for a particular destination field:
- <Source Vendor> Field
- User Defined Value
- Mapping
- Conditional Mapping
- Token
- Transformation
- Expression
- Aggregation
Some of the source options listed in the Source drop-down list require you to select an option from the Source Details drop-down list, while others don’t. If a selection is required, the Source Details drop-down list is shown.
Note: If you selected Dayforce Report in the Define the parameters of this integration step, the Auto-fill from Report button is shown to the right of the Add button. Integration Studio populates the name, description, and data type of each field from the report in Step 3, in the same order as the report.
<Source Vendor> Field
This is a direct, one-to-one relationship: one <Source Vendor> field linked directly to one Destination Vendor field. For example, in outbound integrations, if the vendor field EmployeeFirstName is the same as the Dayforce field FirstName, Source can be set to Dayforce Field, and Source Details can be set to Data > FirstName.
You can use the type-ahead search to find the source field you need, because some Source HCM Anywhere APIs offer a large list of fields to select from. For example, if you are using the Bulk Employee API, enter xrefcode paytype employmentstatus in the Source Details drop-down list to filter the list of possible fields to select when you are mapping to the employee’s Pay Type XrefCode value. If you’re not sure which field you need, start with the most general term (for example, xrefcode) and get more specific to reduce the overall number of results returned.
If the selected source field could have more than one assignment per employee, such as a Global Property in an outbound integration, a filter icon is shown to the right of the drop-down. Click the icon to restrict the returned data to a single value. After the filter has been applied, the icon turns blue. More information on this topic is available in The Importance of Understanding Your Data Structure.
User Defined Value
This is a static value provided by the user. Manually enter a value, or leave the field blank if the field should always be empty. For example, if the first field in every XML integration should be the company name, the Source of the first field can be set to User Defined Value, and the user can type the company name into the Source Details text box.
Acceptable user defined values vary based on the Data Type of the destination field. If the data type is Date/Time, only Date/Time values are allowed. Similarly, if the data type is Number, only integer and decimal values are accepted.
Mapping
The Mapping option is used when source values need to be mapped to destination values. This option assumes that there is a direct relationship between a <Source Vendor> field and a Destination Vendor field, but that the values contained within might vary between systems.
For example, if the vendor field Status is derived from the Dayforce field EmploymentStatus > Xrefcode, the Source can be set to Mapping and Source Details to EmploymentStatus > Xrefcode. After you click Edit, you can import the list of required status codes in Dayforce and their relative values in the receiving system (for example, ACTIVE in Dayforce to 01 in the target system, and so forth). The Else clause is used when the Dayforce field is missing or the value is not included in the list of mapped values.
Before you can link source values with Destination Vendor values, you must define the Destination Vendor values in Dayforce. After you select the Source, click Edit, and then click either Edit Vendor Values to add them one by one, or click Import to import them all at once. The best practice is to import, because that will create the vendor values and also streamline the assignment of Dayforce values to vendor values. If you’ve already manually configured required vendor values in one integration, and you need the same set for another integration, click Export to get the full list.
Important: If the field to be mapped has a Data Type of Boolean, importing vendor values isn’t supported.
If the selected Dayforce field to be mapped can have more than one assignment per employee, such as a Global Property in an outbound integration, a filter icon is shown to the right of the drop-down. Click the icon to restrict the returned data to a single value. After the filter has been applied, the icon turns blue. More information on this topic is available in The Importance of Understanding Your Data Structure.
Conditional Mapping
The Conditional Mapping option offers logic that allows for the evaluation of multiple criteria sets to determine the output for a given field. There is no Source Details selection for this option, so once Conditional Mapping is selected as the Source, click Edit.
For example, the vendor field PayClass could be one of several values, depending on the country that an employee lives and works in. You can set the Source to Conditional Mapping, click Edit, and then define one or more criteria sets.
Each numbered criteria set can return one Result and each set is made up of one or more conditional statements. If none of the criteria sets evaluate to true, the Else clause will be used.
For each condition, you must define the Criteria Field (field used for comparison), Operator (type of comparison), and Comparison Value (value compared to Criteria Field). The Criteria Field and Comparison Value each also contain source selector drop-downs, which allow you to compare a source field to another source field, a user defined value, a token, or the result of a transformation of Dayforce fields.
If any selected source field could have more than one assignment per employee, such as a Global Property in an outbound integration, a filter icon is shown to the right of the drop-down. Click the icon to restrict the returned data to a single value. After the filter has been applied, the icon turns blue. Additional information is available in The Importance of Understanding Your Data Structure.
To add additional criteria sets, click Add Criteria Set. To copy an existing criteria set, click the Copy icon at the top right of the criteria set box.
Criteria sets are evaluated in order. Use the Up and Down icons at the top right of the criteria box to rearrange the order of the criteria sets, to ensure that they are evaluated in the appropriate order.
To remove extra criteria sets, click the Trashcan icon in the upper right-hand corner of the criteria set. Also note that each conditional statement has an action icon (three vertical dots) to the right that opens a list of actions available for the corresponding entry. To add or remove a conditional statement, click the action icon and click either Add or Delete.
Important: All conditional statements within a criteria set must evaluate to true in order for the configured result to generate.
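Conceptually, criteria sets evaluate like chained if expressions in IDL. A sketch with hypothetical parameter and result values:

if (country ?? "") =~ "USA" {
  "PC_US"
} else {
  if (country ?? "") =~ "CAN" {
    "PC_CA"
  } else {
    "PC_OTHER"   // the Else clause
  }
}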
Token
The following types of tokens are available in Integration Studio:
- System Tokens - values cannot be edited.
- Configurable Tokens - values change based on the integration run.
- Custom Tokens - values are defined using an IDL expression.
Currently, the following pre-defined tokens are supported:
Token Name | Description |
---|---|
[INTEGRATION_RUN_DATE] | This token returns the date at runtime. Used if the integration should specify what date and time it was generated. |
[LAST_SUCCESSFUL_RUN_DATE_UTC] | This token returns the date of the latest integration run with a status of Completed. The time and date returned is in Coordinated Universal Time (UTC). Note: Integration Studio uses the Effective From date when the Effective From date is more recent than the last run date and time or when you run the integration for the first time. |
[LAST_SUCCESSFUL_RUN_DATE_DAYFORCE_TIME] | This token returns the date of the latest integration run with a status of Completed. The time and date returned reflects the standardized time used by specific Dayforce servers. Note: Integration Studio uses the Effective From date when the Effective From date is more recent than the last run date and time or when you run the integration for the first time. |
[TERM_MINUS_ONE] | Commonly used transformation when integrating with other HCM systems: Employee Termination Date minus one. Available only when the source API is the Bulk Employee API. |
[INTEGRATION_RUN_COUNT] | This token returns an integer that represents the number of times an integration ran successfully. The run count for this system token starts at 1 and is incremented sequentially by 1 with each successful run of an integration. The run count is reset to 1 when the source API is changed or when a new integration is created or imported. You can also reset this token from the Manage Integration Tokens screen that opens when you click Manage Tokens in the mapping step. The value entered must be an integer that is greater than 0. |
By default, the LAST_SUCCESSFUL_RUN_DATE tokens listed above are not updated during ad-hoc runs, so that data can be resent without impacting the data that is already included in the next scheduled run. To update the tokens during an ad-hoc run, select the Update Last Successful Run Time checkbox in the Run <Integration> dialog box, and then click Run.
Source Details selection for Tokens is limited by destination field data type.
You can manage tokens used in your integration by clicking Manage Tokens in the mapping step. In the Manage Integration Tokens screen that opens, you can reset the value of a system token in the Configurable Tokens section, if needed. Only system tokens such as the INTEGRATION_RUN_COUNT token can be reset, and the value you enter must be an integer that is greater than 0.
You can manage all other system tokens in the Static Tokens section.
Transformation
Transformations allow you to manipulate one or more source fields before mapping to a destination field, the Result of a criteria set, or the Else clause in Conditional Mapping. Similarly, a transformation can be used as a Criteria Field or Comparison Value within Conditional Mapping.
There is no Source Details selection, so after you select Transformation as the Source, click Edit, select the appropriate transformation, and populate the required fields.
For example, if the destination vendor field Assignment is a combination of two underscore-separated source fields, Position and Department, Source can be set to Transformation. When you click Edit, select the Combine Multiple Fields transformation, give the transformation a sensible name, select Underscore as the separator, and select the appropriate source fields to combine.
If a selected source field in the Transformation configuration can have more than one assignment per employee, such as a Global Property, a filter icon is shown to the right of the drop-down. Click the icon to restrict the returned data to a single value. After you have applied the filter, the icon turns blue. More information on this topic is available in The Importance of Understanding Your Data Structure.
Transformation options are limited by data type. For example, to map using Combine Multiple Fields, the destination field Data Type must be set to String. Similarly, to use Add (+) or Subtract (-) Time as a Comparison Value in Conditional Mapping, the configured Criteria Field must have a data type of Date/Time.
Saved transformations can be reused in other field mappings, as well as custom tokens in the File Name and API URL for Event Driven integrations. To reuse a transformation, open the Transformation page and instead of selecting Configure New <Transformation Name>, select the appropriate Saved <Transformation Name>. If a transformation is no longer needed for the integration, delete it by clicking the trash can to the right of the transformation and confirm deletion.
If you delete a saved transformation by clicking the trash can icon to the right of the Transformation drop-down list, it is removed from all fields. To change the loaded transformation, select a different option.
See the “Options in the Transformation drop-down list” table for a full list of available transformations. Multiple instances of the same transformation can be created or differentiated by the transformation name. By default, the transformation options are as shown preceded by “Configure New”. However, once the transformation is saved, the user-defined name is shown in the list preceded by “Saved”. For example, “Configure New Combine Multiple Fields” versus “Saved Combine First, Middle, and Last Name”.
Transformation Option | Return Data Type | Description |
---|---|---|
Combine Multiple Fields | String | Combine multiple source fields, user-defined values, tokens, or a combination of the items listed into a single destination field. |
Replace Text | String | Replace one or more characters in a given field. This includes alphanumeric characters as well as special characters, such as an asterisk (*). Important: The string to be replaced is case-sensitive, so be sure to enter the value with the appropriate capitalization. |
Return Portion of a Text Field | String | Return one or more characters from the Source Field by using the Starting From and the Until drop-down lists. The options in the Starting From drop-down list are Beginning of, End of, and Specific Position. When you specify a position to start, the character in the specified position is included in the result, up to the position you set in the Until field. The options in the Until drop-down list are Specific Character, Specific Position, and Number of Characters. The first character sits in position 1. |
Remove Leading & Trailing Whitespace Characters | String | Removes any leading and trailing whitespace characters from the selected source field. |
Return Portion of Delimited String | String | Returns values that exist between two instances of a given character. |
Add (+) or Subtract (-) Time | Date/Time | Modify a date value. To add to the date selected, enter a positive integer. To subtract from the date provided, enter a negative integer. |
Configure Return Relative Date | Date/Time | Returns a date that is relative to another date. |
Calculate Duration of Time Between Start and End Date | Number | Calculate the duration of time between two dates. |
Convert String Field to Numeric | Number | Convert a source string field to a numeric value to align with your vendor's requirements. For example, in outbound integrations, the Employee Number field is stored as a string in Dayforce, but many vendors only accept numeric inputs for the Employee Number field. |
Calculate Length of a String Field | Number | Calculate the number of characters in a string. For example, your payroll partner might need to know the number of digits in a bank account number stored in the source system for their internal system logic. |
Change Sign of a Numeric Value | Number | Fields that are expressed as positive values in the source can be sent as a negative value in the final output file if your vendor's system requires it. For example, a deduction is stored as a positive value in Dayforce, but many vendors store a deduction as a negative value. |
Configure Sum Values | Number | Sums all instances of a source field. You have the option to define a filter to limit which values should be summed, if needed. This functionality is particularly useful when you are configuring WFM or Payroll exports. You can configure payroll and general ledger (GL) integrations to return the absolute value for fields that use the Configure Sum Values transformation by enabling the Return Absolute Value of Sum switch. |
Configure Absolute Value | Number | Returns the absolute value for fields. |
Return Portion of a Date | Number | Returns a partial date. You can return the month, day, or year of a date from the source. |
Count Records | Number | Map fields based on the number of records within an array that meet your criteria. This transformation can only be used in conditional mappings; use Count in the Aggregation source type when mapping directly to a field. |
Expression
Expressions, written in Integration Description Language (IDL), allow complex field configuration as well as the use of functions not yet available elsewhere in Integration Studio. For more information about IDL, see Reference: Integration Description Language.
By default, the IDL Expression Editor includes an example of a field-level expression. The example expression is removed when you type in the editor. To streamline writing expressions, configure the field manually as much as possible, then in Step 3, click View IDL, copy the IDL, and paste it into an expression-sourced field. Configure the required parameters and update all references to source fields with the newly created parameter names.
Information about IDL syntax and semantics is available in the topic Reference: Integration Description Language.
A full list of IDL functions is available in Functions.
More information about generated IDL is available in the topic View IDL.
Key Topics for Expressions
Key topics for Expressions in this topic are:
- Example Expressions
- Parameters
- Functions
- Configure Parent Level Expressions
- Common Expression Errors
Example Expressions
In general, an expression is a snippet of IDL code that returns a specific value, or in the case of parent level expressions, specific values. Below are examples of simple IDL expressions, abstracted from any particular source, which could be combined to create more complex expressions:
Coalesce Expressions are used to handle values that may be missing in the source. If the value represented by the possibly_nil_or_missing_value parameter is missing or nil in the source, the destination uses the provided alternative, "DEFAULT_STRING":
possibly_nil_or_missing_value ?? "DEFAULT_STRING"
Boolean Expressions determine whether a given condition is true or false. The example below returns the boolean value true (note that AND has higher precedence than OR):
true && false || true
Mathematical Expressions are used to compute a numeric result:
1 + 2 * 3 - 4
Returns the value 3. Note that PEMDAS/BODMAS is respected.
Function Call Expressions manipulate data using a function from the IDL function library:
string_concat("test", "ing")
Returns the string "testing".
Function Pipeline Expressions allow chaining of function calls:
date(2024, 3, 18) |> date_add(1, "day")
Returns the computed date 2024-03-19 from the date function call fed into the date_add function call.
If Expressions return a value from one of the defined branches. For example, the expression below returns true because 2 + 3 = 5, which is greater than 4.
if 2 + 3 > 4 {
true
} else {
false
}
Array Expressions define arrays of data, such as the array of blocks below:
[
{ foo = 111; bar = 222; },
{ foo = 333; bar = 444; },
{ foo = 555; bar = 666; },
]
Block Expressions define a block (also known as a "record" or "object"), composed of key/value pairs. Block expressions can only be configured when the integration has been converted to full IDL.
{ foo = 111; bar = 111 * 222; } --> { foo = 111; bar = 24642; }
Invalid Expressions are syntactically incorrect, such as when configuring a block expression in a field level or parent level expression, instead of converting the integration to full IDL:
test {
item = 123;
}
This expression is invalid because it ends in a semi-colon:
2 + 3;
This is invalid as a field level expression; however, this is the appropriate format for a destination field titled “test” within a parent level expression:
test = 1 + 2 * 3;
Parameters
Before writing an expression, you must define one or more parameter names, select a source for each, and then map it. The sources include:
- <Source Vendor> Field
- <Source Vendor> Array
- Token
- User Defined Date
- User Defined String
- User Defined Number
Parameters are used to streamline expression configuration; they remove the need to type out the full source field path, and allow for testing. You should refer to them in your expression using the name that you’ve created. Once your expression is configured, you can provide test values and click Run IDL with Test Values to see the results in real time.
Note: Run IDL with Test Values is disabled when at least one array parameter is present. For more information about arrays and their importance in your data, see The Importance of Understanding Your Data Structure.
The <Source Vendor> Field and <Source Vendor> Array parameters reference fields or arrays in the source. When a value exists inside the top-level array, such as the XrefCode for an employee in the HR Bulk API, that value can be referenced using a parameter with <Source Vendor> Field as the source and the XrefCode field selected in the detail drop-down.
On the other hand, you must use an array parameter to reference a field that is within a sub-array of the source. For example, the Deduction Election array is a sub-array of the employee data top-level array. To reference the XrefCode of the Deduction object within the Deduction Election array, you could configure an array parameter called deds and write out the path to the Deduction XrefCode in the expression as deds.Deduction.XrefCode.
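For example, a sketch (assuming a hypothetical deduction code of UNION_DUES exists in the source) that uses the deds array parameter to return a single matching election:

// Returns the first deduction election whose code matches UNION_DUES
find(deds, ded => (ded.Deduction.XrefCode ?? "") =~ "UNION_DUES")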
Functions
Functions take inputs, or arguments, and use them to perform specific tasks. Individually the tasks seem simple, such as calculating the difference in days between two dates (DateDiff) or checking to see if something exists (IsMaterial), but expressions allow nesting functions within functions to perform more complex operations. The Expression Editor also exposes mathematical functions, such as addition, subtraction, multiplication, and division.
Note: Mathematical operations follow standard operator precedence (PEMDAS/BODMAS).
Values used as inputs to functions fall into one of the following categories: known, unknown, or missing. In an outbound integration with the HR Bulk API as the source, a known value would be the XrefCode of the employee. This is a required field in Dayforce and will always be present in the source for an employee. By default, the only values included in Dayforce sourced data are known values.
If a field used in the integration is missing in the source, Integration Studio treats it as unknown, or nil. Fields configured without the use of expressions can treat unknown and missing values the same because all references to source data are wrapped in a coalesce function by default. Expressions are built from scratch by the user, so unknown values must be handled explicitly, either by wrapping the parameter in a coalesce function or by ensuring the data set used for the integration is always present.
Handling missing values is critical to ensuring reliable integrations, because missing values are viral. If any source field referenced in the expression isn't wrapped in a coalesce function and is missing from the source data when the integration runs, the value on the generated file will also be missing, whether the field references a single source value (as with a direct mapping) or several values (such as when concatenating multiple source fields).
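For example, a sketch assuming parameters named first_name and middle_name. Without a coalesce, a missing middle name makes the entire concatenation missing; wrapping the parameter with ?? keeps the result material:

string_concat(first_name, middle_name)         // missing whenever middle_name is missing from the source
string_concat(first_name, middle_name ?? "")   // returns first_name when middle_name is missing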
Some IDL functions require you to use array parameters to iterate over data, such as:
- Find: Returns a single value from an array of data. For example, returning only Work Assignments that are listed as Is Primary.
- Length: Returns the total number of items in a given array. For example, determine how many direct deposits exist for a given employee and omit records if more exist than can be consumed by the vendor system.
- Min/Max: Returns the smallest/largest value from an array of data. For example, determine whether the Employee or Pay Group setting is smaller or larger for an Earning Election, or whether the Hire Date occurred earlier or later than the Rehire Date.
A simple expression on one of the footer or summary fields might sum all of the amounts in a report of earning quick entries. Assuming the parameters defined in Step 2 limit the payroll data to earnings, and an array parameter is defined referencing the top-level source array, src, the following expression would return a running total of all earning amounts reported in the file:
sum(src, qe => qe.Amount)
Alternately, sums might be reported per earning. For example, one field might report the total amount paid of Regular, another Overtime, and yet another Annual Bonus. To accomplish this, each field needs the same source array parameter, src, and a filter prior to the sum:
filter(src, prdata => prdata.CodeName =~ "Regular")
|> sum(qe => qe.Amount)
Officially, the sum function requires an array as an input, or argument, which you can see in the first of the two preceding examples but not the second, though both are valid expressions. Function pipelines allow the result of one function to serve as the input of another function without nesting them together. In the second example, the filter function returns an array of items that match the specified criteria (all of the records in the source that have a CodeName of Regular), and that array is fed into the sum function as the first argument via the pipeline |>.
Another important component of functions is the lambda, or anonymous function. Lambdas take the form of arg => expression and are generally required within functions that control or restrict the data returned, such as any, filter, find, map, sum, and so forth. In the example above, the filter function accepts an array as input and uses a lambda to define the filter criteria:
prdata => prdata.CodeName =~ "Regular"
Similarly, the sum function accepts an array as input and uses the lambda to define the field to be summed:
qe => qe.Amount
Some functions in Integration Studio might look interchangeable, such as is_material and is_nil, but they aren't. These IDL functions are both intended to check for particular states of data within an expression, such as whether a field has a value of null in the source, or whether an array contains any records. The validation performed by each function is different.
For example, a field in the source is_material if it exists and has a value in the source that is not null. An array in the source is_material if it includes at least one record. In addition, a field is_nil only when it has a value of null in the source.
is_material("test") // ==> true
is_material("")     // ==> false
is_nil("")          // ==> false
is_nil(nil)         // ==> true
Note: In Integration Studio, nil and null represent the same state. When a field is null in the source, it is nil when being evaluated in IDL.
Configure Parent Level Expressions
Parent Level Expressions allow maximum flexibility in mapping data from source to destination by allowing any number of source arrays to be referenced and mapped to the same set of child fields in the destination. Additionally, parent level expressions make possible otherwise basic configuration with nuance, such as the use of functions and filters for specific fields that cannot be configured in the UI.
Some aspects of expression configuration are shared between parent and child fields:
- Parameter types are consistent.
- Source Fields must be referenced by way of parameter.
- Source fields within an array must be referenced by way of an array parameter.
Parent level expressions also have unique aspects:
- Child fields must exist before you configure a parent level expression.
- Configuration on child fields occur within the expression on the parent.
- Testing parent-level expressions requires running the integration end to end.
A key use for parent level expressions is the merging of source arrays so that users can reference fields from multiple source collections within the same destination array. For example, to reference values contained in both the Earning and Deduction arrays from the Employee Bulk API in the same set of destination fields, add all of the required child fields under the parent, then edit the parent field to enable “Configure Expression on Parent”. When enabled, any configuration present on the child fields is wiped out and an Edit button appears to the right of the parent field.
By default, a sample expression with each child field listed is shown until you start typing. The final expression must contain the same opening and closing curly braces and each child field under the parent must have a valid IDL mapping. Given three destination fields, XrefCode, Value, and Type, all three would need valid mappings encased in the curly braces to save the expression.
Arrays at the same depth in the source, such as the Earning and Deduction arrays in the Employee Bulk, can be merged using the concat function, then mapped using the map function. For example, the parent level expression below concatenates the mappings for the Deduction Array, deds, and the Earnings array, earns, into the same set of destination fields.
concat(
map(deds, ded => {
XrefCode = (ded.Deduction.XRefCode ?? nil);
Type = (ded.Deduction.CalculationType.XRefCode ?? nil);
Value = (ded.EmployeeDeductionParameters.Items.Value ?? nil);
}),
map(earns, earn => {
XrefCode = (earn.Earning.XRefCode ?? nil);
Type = (earn.Earning.CalculationType.XRefCode ?? nil);
Value = (earn.EmployeeEarningParameters.Items.Value ?? nil);
}))
This expression could also be written using function pipelines, first mapping the Deduction array and concatenating it to the mapped Earning array.
map(deds, ded => {...see above example...})
|> concat(
map(earns, earn => {...see above example...}))
Arrays of differing depths in the source can be concatenated and mapped in a parent level expression, but only after the required source fields are organized via the flat_map function. The following expression first flat maps the top-level source array, src, with the DeductionElections array and creates a new array composed of the Employee Number and the Election information. This new array is fed as input to the map function, which includes the mappings for each child field.
flat_map(src, Employee =>
map(Employee.DeductionElections.Items, DeductionElection => {
Election = DeductionElection;
EmpNumber = Employee.EmployeeNumber;
})
)
|> map(deds => {
EmployeeNumber = (deds.EmpNumber ?? nil);
ElectionXRefCode = (deds.Election.Deduction.XRefCode ?? nil);
ElectionValue = (deds.Election.EmployeeDeductionParameters.Items.Value ?? nil);
})
Note: Nested flat maps might be needed if the record being reported is more than one layer deep, as in the preceding example.
More information about arrays and their importance in your data is available in the topic The Importance of Understanding Your Data Structure.
Common Expression Errors
This section describes some common expression errors in Integration Studio.
Not an expression
The IDL expression provided isn’t valid. This can happen for a variety of reasons, including but not limited to:
- Unmatched parenthesis
- Inclusion of non-standard named blocks
- Missing semi-colons at the end of a block
- Expression ending with a semi-colon
- General syntax errors
- Unfinished binary operations (for example: 2 +)
Path or variable does not exist: <Variable>
The specified value is undefined, most often caused by a discrepancy between a parameter name as defined and as used in the expression, an undefined parameter used in the expression, or incorrectly entered IDL keywords.
Wrong return type
The result of the expression is of a different data type than the one defined for the field. Review your expression to determine the data type of the result and confirm that it matches the field's settings in Step 3.
Cannot apply operator
An operator used in the expression is incompatible with the data types of one or more of the referenced fields. Review your expression for the specified operator with the specified data types and update as appropriate.
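For example, a hypothetical sketch: the first expression would fail with a Cannot apply operator error because + cannot combine a String and a Number, while the second concatenates two strings successfully:

"Employee " + 42                    // Cannot apply operator: + with String and Number
string_concat("Employee ", "42")    // ==> "Employee 42"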
Aggregation
Aggregation allows you to count the volume of specific elements processed and report the total in the generated file or payload. This is most commonly used in 834 benefit integrations to report the number of employees, dependents, and the sum of both included in the file.
Aggregations are configured on numeric child fields, within a hierarchical structure, to count parent fields, or elements, within the configuration. For example, an integration might include information about an employee's work assignment and direct deposits, as well as a footer that shows the total volume of employees included in the integration. Each non-root parent (Employee, WorkAssignment, Direct Deposit, Footer) is an element that can be counted.
There is no selection in the Source Details drop-down list, so after Aggregation is selected in the Source drop-down list, click Edit. There is only one Aggregation Type available: Count. Select the element or elements (parent fields) to be counted, and click Save.