In this tutorial we discuss some scenario-based questions and their solutions using SAP Data Services. This article is meant mainly for Data Services beginners.

Consider the following Source data in a flat file:


Scenario 5: Let's try to load the target table data as shown below:



1. Let us first define the Source File Format. The same file format will be reused for the next set of scenario questions.

File Format

2. Next we use the same Batch Job, JB_SCENARIO_DS. Within the Job we create a Data Flow, say DF_SCENARIO_5.

3. At the Data flow level, i.e. context DF_SCENARIO_5, we insert a new Parameter using the Definitions tab. Let's name it $PREV_NAME, with Data type varchar(100) and Parameter type Input.

Parameters- Data flow

Parameter Properties

At the Job level, i.e. context JB_SCENARIO_DS, we initialize the Parameter $PREV_NAME using the Calls tab. We set the Argument value to NULL.

Parameters- Job

Parameter Value

4. Next we create a New Custom Function from the Local Object Library. Let's name it CF_CONCAT_ENAME.

Custom Function

Within the Custom Function Smart Editor, we first insert two Parameters, $CURR_NAME and $PREV_NAME, with Data types varchar(20) and varchar(100) and Parameter types Input and Input/Output respectively.

Custom Function Definition

We also change the Return Parameter's Data type to varchar(100).

5. Next we define the custom function as below, using a comma as the separator, and validate it:

$PREV_NAME = ifthenelse($PREV_NAME IS NULL, $CURR_NAME, $PREV_NAME || ',' || $CURR_NAME);

Return $PREV_NAME;

The purpose of defining the Parameter and the Custom Function is to perform Parameter short-circuiting. Within the function, the Input/Output Parameter $PREV_NAME accumulates the concatenation of all employee names up to the current row. Because it is of type Input/Output, the concatenated string is passed back into the Data flow Parameter after every call. So by using a Custom Function we can modify a Data flow Parameter and pass values back to it: the Parameter defined at the Data flow level is short-circuited with the Input/Output Parameter of the Custom Function.
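The mechanism above can be mimicked in plain Python: a mutable variable held in a closure plays the role of the Input/Output parameter $PREV_NAME, keeping its value across calls just as the Data flow Parameter keeps its value across rows. This is only an illustrative sketch; the employee names and the comma separator are hypothetical examples, not taken from the source file.

```python
def make_concat_ename():
    """Return a function behaving like CF_CONCAT_ENAME: it keeps a
    running concatenation across calls, the way the Input/Output
    parameter $PREV_NAME keeps state across processed rows."""
    state = {"prev_name": None}  # stands in for $PREV_NAME, initialized to NULL

    def cf_concat_ename(curr_name):
        # First row: start the list; later rows: append with a comma.
        if state["prev_name"] is None:
            state["prev_name"] = curr_name
        else:
            state["prev_name"] = state["prev_name"] + "," + curr_name
        return state["prev_name"]  # the Return value mapped to Schema Out

    return cf_concat_ename

concat = make_concat_ename()
print(concat("ADAMS"))  # ADAMS
print(concat("FORD"))   # ADAMS,FORD
print(concat("JONES"))  # ADAMS,FORD,JONES
```

Each call sees the value left behind by the previous call, which is exactly why the function produces a cumulative list rather than a per-row value.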

6. Let's go back and design the Data flow. First of all we take the File Format defined earlier from the Local Object Library and use it as the Source.

Data flow

7. Next we place a Query transform, say QRY_SORT. First we select the columns DEPTNO and ENAME from the Schema In of the Query transform and map them to output. We specify the ORDER BY on DEPTNO and ENAME, both in Ascending order.

Query- Sort

8. Next we place a Query transform, say QRY_CONCAT_NAME. First we select the column DEPTNO from the Schema In of the Query transform and map it to output.

Next we specify a New Function Call in the Schema Out of the Query transform. Choose Custom Functions from the Function categories and select the Function name CF_CONCAT_ENAME.

Next we Define Input Parameters. We specify the inputs as below:


Function Input Parameters

Select the Return column as the Output Parameter.

Query- Function Call

9. Next we place a Query transform, say QRY_FORMAT. First we select the columns DEPTNO and Return from the Schema In of the Query transform and map them to output. Rename the Return column to ENAME_LIST.

Query- Format

10. Finally we place a Template Table as Target in the Target Datastore.

Data Preview
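The whole Data flow (QRY_SORT, QRY_CONCAT_NAME, QRY_FORMAT) can be sketched end to end in Python, under the assumption of a small hypothetical source of (DEPTNO, ENAME) rows; the actual file contents are shown in the screenshots above.

```python
# Hypothetical source rows: (DEPTNO, ENAME)
rows = [(20, "SMITH"), (10, "CLARK"), (20, "ADAMS")]

# QRY_SORT: ORDER BY DEPTNO, ENAME ascending
rows = sorted(rows)

# QRY_CONCAT_NAME: per-row call of the custom function; `prev` is the
# running value carried between rows by the Input/Output parameter
prev = None
out = []
for deptno, ename in rows:
    prev = ename if prev is None else prev + "," + ename
    # QRY_FORMAT: keep DEPTNO and rename the Return column to ENAME_LIST
    out.append((deptno, prev))

for row in out:
    print(row)
# (10, 'CLARK')
# (20, 'CLARK,ADAMS')
# (20, 'CLARK,ADAMS,SMITH')
```

Note that the list keeps growing across department boundaries; restarting it per DEPTNO is the partitioned variant covered in the next scenario.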

Click here to read the next scenario - Cumulative String Aggregation partitioned by another column.
