In our earlier article on Data Services we learned how to register the local repository with the CMC. In this part of the article we will start using the Data Services Designer.

Launching the BODS Designer to create Jobs

After successfully creating the local repository, configuring the Job Server, and registering the repository in the CMC, you are ready to launch the BODS Designer. Once the Designer is open you can create and execute BODS jobs.

Here are the steps for launching the BODS Designer:

1. Go to the Start Menu and click on SAP BusinessObjects Data Services 4.0 SP1 -> Data Services Designer. A logon screen as shown below is displayed (Figure 3.1).

Figure 3.1

2. Enter the credentials as required

System – host[:port] – The name or IP address of the machine on which the BODS server is installed (not to be confused with the database server). This BODS server could be the same machine you are working on, or it can be installed on another machine and reached over the network.

User Name – This username is set by the administrator who installs the BODS tool. In this case I will use the super user name "administrator".

Password – The password set for the administrator username at installation time.

3. After entering the credentials, click on "Log On". All the repositories created and registered with the CMC on that BODS server are displayed (Figure 3.2). We created only one repository, so only that one is listed.

Figure 3.2

4. Select the repository (DS_LOCAL_TEST in this case) and click on OK (or double-click the selected repository). The Designer opens as shown in Figure 3.3.

Figure 3.3

5. Figure 3.3 shows the first screen we get once we launch the BODS Designer, and this is our playground. We create, execute, and debug our BODS jobs using this Designer.

Creating Data Services Jobs

After the preparation steps and launching the Designer we are now set to start our job. Before we start programming, there is an object hierarchy that we need to understand.

  • Every job that we create in BODS must be embedded within a project.
  • The "Project" object is the highest in the hierarchy, followed by "Job", then "Workflow"/"Condition", and finally "Dataflow". Into the dataflow we drag and drop the various transforms, tables, and other objects that perform the actual work when the job runs.

The above statement may be a little confusing at first.

To make it simpler, let's use a real-life analogy. Here the BODS Designer is the world and a Project is a country. A country contains many states, so the states are our jobs. Within a state there are districts; those are our workflows or condition flows. Within the districts there are corporations/municipalities; likewise, within the workflows there are dataflows. And in those corporations and municipalities there are people; similarly, in the dataflow there are tables and transforms.

World > Country > State > District > Corporations/Municipalities > People

Designer > Project > Job > Workflow/Condition > Dataflow > Tables/Transforms
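Purely as an illustration (the names below are placeholders, not BODS syntax), the containment chain above can be sketched as nested data:

```python
# Illustrative nesting of BODS objects; all names are placeholders.
designer = {
    "MyProject": {                 # Project: top of the hierarchy
        "MyJob": {                 # Job: must live inside a project
            "MyWorkflow": {        # Workflow / Condition (optional)
                "MyDataflow": [    # Dataflow: where the work happens
                    "source table",
                    "Query transform",
                    "target table",
                ],
            },
        },
    },
}

# Walking down mirrors: Designer > Project > Job > Workflow > Dataflow
dataflow = designer["MyProject"]["MyJob"]["MyWorkflow"]["MyDataflow"]
print(dataflow)  # ['source table', 'Query transform', 'target table']
```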

We will familiarize ourselves with the windows inside the Designer window (such as the Local Object Library) as we go. These will become clearer as we work through the coming examples and practice jobs.

Creating our first BODS Program

1. Click on Project -> New -> Project (Figure 4.1).

Figure 4.1

2. A new window appears to create the project. Enter the name you would like to assign; in this case we go with "Prj_Test" (Figure 4.2).

Figure 4.2

3. Click on the "Create" button.

4. The project Prj_Test is created. Now we have to create a job under it. Right-click on the project Prj_Test that we just created and select the option "New Batch Job" (Figure 4.3). At present we are dealing only with batch jobs and not real-time jobs; the basic difference is that a batch job is started on demand or on a schedule and processes its data in bulk, whereas a real-time job runs continuously and processes messages as they arrive.

Figure 4.3

5. Create a job with any name; we have used "Job_Test" (Figure 4.4).

Figure 4.4

6. After creating "Job_Test", we are ready to create workflows/conditions and dataflows. To execute transforms, the minimum object we need inside a job is a dataflow. Workflows and condition flows are not mandatory, but it is good programming practice to use them. From the tool bar on the right (indicated in Figure 4.5), click on the workflow icon (second object from the top) and place it anywhere on the Designer workspace. Rename the workflow (any name); I have used WF_TEST1.

Figure 4.5

7. Double-click on the workflow WF_TEST1. From the tool bar on the right, click on the dataflow icon (third object from the top) and place it anywhere on the workspace within the workflow. Rename the dataflow 'DF_TEST1' (Figure 4.6).

Figure 4.6

8. We can now see the objects we created, both in the "Local Object Library" (by clicking on each of its tabs) and in the "Project Area" in a hierarchical format (Figure 4.8).

Figure 4.8

9. Next we start the actual transformation. For this we begin by creating a "Datastore". As discussed earlier, a datastore is a logical connection to a database.

To create a datastore in BODS and link it to a database, we first need a database with a few tables already created in it. In our case, I have created a database "db_Test_Data" on my database server with two tables, tblEmployee and tblEmployeeSalary (these were created at the database backend, not in BODS).

We will now import these two tables into the BODS staging area using a datastore connection (staging is a common term for the Designer area itself, where the data transformations are done). Figure 4.10 shows the database and tables I have created in the SQL Server database.

Figure 4.10
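If you want to follow along without SQL Server, here is a minimal sketch of equivalent source tables using Python's built-in sqlite3 module. This is illustrative only: apart from EMP_NO and EMPSALARY, which are used later in this exercise, the column names and sample rows are assumptions.

```python
import sqlite3

# Stand-in for the backend database "db_Test_Data" (SQLite is used
# here purely for illustration; the article uses Microsoft SQL Server).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Assumed schemas: only EMP_NO and EMPSALARY are named in the text;
# EMP_FIRSTNAME is a hypothetical extra column.
cur.execute("CREATE TABLE tblEmployee (EMP_NO INTEGER PRIMARY KEY, EMP_FIRSTNAME TEXT)")
cur.execute("CREATE TABLE tblEmployeeSalary (EMP_NO INTEGER, EMPSALARY REAL)")

# A couple of sample rows so the later join has something to return.
cur.executemany("INSERT INTO tblEmployee VALUES (?, ?)",
                [(1, "Alice"), (2, "Bob")])
cur.executemany("INSERT INTO tblEmployeeSalary VALUES (?, ?)",
                [(1, 50000.0), (2, 60000.0)])
conn.commit()

print(cur.execute("SELECT COUNT(*) FROM tblEmployee").fetchone()[0])  # 2
```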

10. Now we create the datastore to connect to the database and import the tables. Figure 4.11 shows datastore creation. Go to the Datastore tab of the Local Object Library, right-click in it, and click "New" on the popup menu.

Figure 4.11

11. Once “New” is selected a screen as shown in figure 4.12 would be displayed.

Figure 4.12

12. Enter the credentials in the textboxes and select appropriate options (Figure 4.13).

Figure 4.13

Datastore name – The name you want to give the new datastore. Here I have used "DS_TEST1".
Datastore Type – In our case it is "Database".
Database Type – My database is "Microsoft SQL Server".
Database version – The version on my system is "Microsoft SQL Server 2008". Select the option appropriate for the version you are using.
Database Server name – The name or IP of the machine/server that holds the tables you are going to import (not necessarily the one hosting your local repository database; it could be the same or a different server). I have created both on the same database server.
Database name – Name of the database which contains the tables to be imported into BODS.
Username – The username used to log on to the database server.
Password – The password for that username.

13. After filling in these details click on "OK".

14. Leave all the other check boxes and buttons seen on the window alone for now; we do not need them yet.

15. We can now see the datastore (DS_TEST1) we just created in the Local Object Library, as shown in Figure 4.14.

Figure 4.14

16. Click on the datastore DS_TEST1 to expand the tree; three nodes can be seen, namely "Functions", "Tables" and "Template Tables". Our next task is to import the tables tblEmployee and tblEmployeeSalary that we created at the database backend through this datastore. There are two ways to do this:

  1. Double click on the “Tables” icon.
  2. Right click on the “Tables” icon and select “Import by Name”. Using this method we can import the tables faster from database, but one has to know the exact table name to do this.

Here I will go with method '1', as I do not have many tables in my database.

17. After double-clicking the "Tables" icon we get a window as shown in Figure 4.15.

Figure 4.15

Note – If there are more tables in that database, all of them would be listed here. Since I created only two tables, only those two are listed.

18. Select both tables, right-click, and click "Import" in the popup menu (Figure 4.16).

Figure 4.16

19. Now click on the Tables node in the Local Object Library again and you can see the imported tables in the tree (Figure 4.17).

Figure 4.17

20. We now have the source system table definitions in our datastore. (But this does not necessarily mean we have extracted the data.)

Exercise 1

Now that our datastore is set up and the job is ready, let's try a small exercise. We are going to perform a small-scale data migration, or ETL, with the following scope:

  • Extract the data from the source system for which we created the datastore (DS_TEST1) into a staging area.
  • Perform a small transformation on the incoming data by adding a new column, 'EmpLastname', to the table tblEmployee and leaving it blank.
  • Join the two tables tblEmployee and tblEmployeeSalary on the common field EmpNo.
  • Load the result into a table in another database.

(Do not panic seeing too many to-do items; these are easily doable tasks for a starter)

Solution design

Any programmer knows there are always different ways to approach a problem, but in the end the most efficient and easiest method is adopted. We will follow the same approach here. In BODS, the best way to make your programs execute efficiently is to reduce the number of objects and flows wherever possible. This ability comes only over time, through practice and experience.

Let's take our example: we can actually handle all four requirements using just one dataflow. Here is how we do it –

1. Go to the job ‘Job_Test’ we had created, double click on the dataflow ‘DF_TEST1’.

2. Drag both the tables from the Local Object Library to the Designer workspace.

3. A dropdown menu pops up (for both tables). Select 'Make Source' for both tables (Figure 4.18).

Figure 4.18

4. Once both tables are in the workspace, we need to use the BODS-defined transform objects. We cannot cover all of the transform objects and functions here, but we are going to use one of the most useful and helpful of them: the 'Query' transform.

5. Click on the "Transforms" tab of the Local Object Library, then click on the "Platform" node to expand it and see the list of transform objects. Drag the transform called 'Query' (fourth from the top) to the workspace, between the two tables.

6. After placing the transform, click and drag the dark blue dot on the table ('A' in Figure 4.19) and join the line to the inward arrow of the Query transform ('B' in Figure 4.19). The table tblEmployee is already connected in Figure 4.19; the other table is not yet connected.

Figure 4.19

7. Once both tables are connected to the Query transform, open it by double-clicking on it. Inside, the two source tables appear on the left side. The right side, which is currently blank, is where we perform our operations (Figure 4.20).

Figure 4.20

8. Select and drag all the fields from tblEmployee to the right side, and only the EMPSALARY field from tblEmployeeSalary (per the requirement we need all fields from tblEmployee along with the salary field). Then click on the "WHERE" tab below the fields area. Here we specify the join condition, just like in a normal SQL query. The join condition in this case is "tblEmployee.EMP_NO = tblEmployeeSalary.EMP_NO". You do not have to type the field names: just click the 'EMP_NO' field of tblEmployee in the left pane and drag it into the empty space inside the "WHERE" tab. Figure 4.21 shows the activities discussed here.

Figure 4.21
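What the Query transform is doing at this point is equivalent to a SQL join. Here is a sketch, using SQLite in place of SQL Server; EMP_FIRSTNAME is a hypothetical column standing in for the other tblEmployee fields:

```python
import sqlite3

# Minimal stand-ins for the two imported source tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tblEmployee (EMP_NO INTEGER, EMP_FIRSTNAME TEXT)")
cur.execute("CREATE TABLE tblEmployeeSalary (EMP_NO INTEGER, EMPSALARY REAL)")
cur.execute("INSERT INTO tblEmployee VALUES (1, 'Alice')")
cur.execute("INSERT INTO tblEmployeeSalary VALUES (1, 50000.0)")

# The field selection in the Query transform plus the WHERE-tab join
# condition correspond to this SELECT statement:
rows = cur.execute("""
    SELECT e.EMP_NO, e.EMP_FIRSTNAME, s.EMPSALARY
    FROM tblEmployee e, tblEmployeeSalary s
    WHERE e.EMP_NO = s.EMP_NO
""").fetchall()
print(rows)  # [(1, 'Alice', 50000.0)]
```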

9. Our next task is to add the new output field 'EmpLastname'. Right-click in the right-hand pane of the Query transform on any existing field (preferably the last field, EMPSALARY in this case) and select the option 'New Output Column' (Figure 4.22).

Figure 4.22

10. A window pops up as seen in Figure 4.23; select "Insert Below".

Figure 4.23

11. Another window is displayed to initialize and set the properties of the new field we are creating. Figure 4.24 shows the window with the values entered for our scenario. Ignore all the other tabs for now and just enter the "Name", "Data type" and "Length" as required.

Figure 4.24

12. Click on “OK” and now we would be able to see the field on the right hand side pane of the query transform (Figure 4.25).

Figure 4.25

13. After adding the 'EMPLASTNAME' field on the target side, we can check whether everything is syntactically correct so far using the "Validate" button. At this point we can already see that we are not yet done with the field mappings, because the blue triangle is missing from the 'EMPLASTNAME' field. But just to understand the functionality of "Validate", let's click on it (Figure 4.26).

Figure 4.26

14. As expected, we get an error message box clearly indicating that the field 'EMPLASTNAME' is not mapped. To correct this we have to complete the mapping of that field. Go to the "Mapping" tab and, in its empty space, type two single quotes ''. The two single quotes assign the field a default value of blank (as per the requirement). Validate again and there are no errors. Next, click on the "Back" button to go back to the dataflow (Figure 4.27).

Figure 4.27

15. We have now completed three of our four tasks. Next we create the target table for the final output. Here I am going to create another datastore, for a database residing on another server, so the effect of this job is like an actual data migration or ETL: we extract tables from one database, transform them in BODS, add fields, and load the result into another database. The datastore I have created for the target table is DS_OUTPUT1 (I am not repeating the datastore creation steps). There are two ways to add an output table to our transformations:

  1. Click on the “Template tables” icon on the Local Object Library under the appropriate datastore. Drag and drop it to the workspace.
  2. Select the “Template table” option from the tool bar on the right side of the workspace and click anywhere on the Designer workspace. A new dialog box would popup, enter the name of the output table as required and select the appropriate datastore under which the table needs to be created.

Here I will go with method '2'. Figure 4.29 shows this step.

Figure 4.29

16. Give the output table any name (I name it "tblOutput") and select the datastore in which the table needs to be created from the "In datastore" drop-down. Then click "OK".

17. We can now see the new table in the workspace. Connect it to the Query transform as we did the other two tables, then validate the job again. There are no errors, and we are ready to test-run our job (Figure 4.30).

Figure 4.30
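Conceptually, the finished dataflow (source tables -> Query transform -> TBLOUTPUT) behaves like the following SQL, sketched here with SQLite for illustration. The mapping of EMPLASTNAME to two single quotes becomes the empty-string literal in the SELECT list; EMP_FIRSTNAME is again a hypothetical stand-in column:

```python
import sqlite3

# Stand-ins for the two source tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tblEmployee (EMP_NO INTEGER, EMP_FIRSTNAME TEXT)")
cur.execute("CREATE TABLE tblEmployeeSalary (EMP_NO INTEGER, EMPSALARY REAL)")
cur.execute("INSERT INTO tblEmployee VALUES (1, 'Alice')")
cur.execute("INSERT INTO tblEmployeeSalary VALUES (1, 50000.0)")

# Extract + transform + load in one statement: join on EMP_NO, add the
# blank EMPLASTNAME column (mapped to ''), materialize the target table.
cur.execute("""
    CREATE TABLE tblOutput AS
    SELECT e.EMP_NO, e.EMP_FIRSTNAME, s.EMPSALARY, '' AS EMPLASTNAME
    FROM tblEmployee e
    JOIN tblEmployeeSalary s ON e.EMP_NO = s.EMP_NO
""")
print(cur.execute("SELECT * FROM tblOutput").fetchall())
# [(1, 'Alice', 50000.0, '')]
```

In the real job the target sits in a different datastore (DS_OUTPUT1), so the "load" crosses databases; the logical operation is the same.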

Executing your BODS job

The job has been created and checked for syntax errors, and it is ready for execution. Before executing, there is one more object we can quickly cover: the Script object. Let's add a script to our job to display the message "Hello World"; without "Hello World" any programming tutorial looks incomplete. Figure 4.31 shows how to add the script.

1. Click on the "Job_Test" icon in the project area to navigate back up, or use the "Back" button.

2. Click on the “Script” button on the toolbar and then click on any space on the designer workspace. You can see the script has been placed on our workspace.

3. Rename it to “SCR_MSG” and then connect it to the workflow “WF_TEST1”.

All these steps are shown in the figure 4.31.

Figure 4.31

4. Double-click on 'SCR_MSG' to open the script editor.

5. In the script editor, type the statement print('Hello World'); as shown in Figure 4.32.

Figure 4.32

6. Press the "Back" button or click on the Job icon to go back to the job level. Validate the job again to make sure everything is syntactically correct, and save your work so far by clicking on the 'Save All' button on the top toolbar (shown in Figure 4.33).

7. Now execute the job. This can be done in either of two ways:

  1. Right click on the Job_Test on the project area and select ‘Execute’.
  2. Click on the ‘Execute’ icon on the upper tool bar or press F8.

Figure 4.33

8. Either way, we get a screen as shown in Figure 4.34. For now, ignore all of the check boxes and tabs on this screen and click 'OK'.

Figure 4.34

Here we can see the name of the Job Server we had created for this repository.

9. Once we press 'OK' the job begins execution; we can see the message Hello World and, towards the end, the message 'Job is completed successfully'. Both are indicated in Figure 4.35.

Figure 4.35

10. Congrats! You have now successfully executed your first BODS job.

11. After execution we can see the number of records processed by clicking on the monitor icon (shown in Figure 4.36) at the top of the screen. At a later stage, if you want to review or revisit your old job execution details, click on the Monitor tab in the project area (shown in Figure 4.36) to see all the jobs you have executed, then click on the job whose status you would like to see. A few things to keep in mind:

  1. In the project area Monitor tab, a small red light (as in this case) means the job was executed successfully.
  2. A big red cross (we will see this next) means the job was executed with errors.
  3. A green light means the job is still being executed.

Figure 4.36

12. Our next and most important activity is to check whether the data has actually been migrated to the target table, and migrated as expected. To do this, go into the dataflow by clicking on it in the project area, then click on the lens-like icon on the target table 'TBLOUTPUT'. There we see the output data as required.

Figure 4.37

Now, if you cross-verify this against our requirements, you can see that the output matches the expectations. The table can also be viewed at the database backend by querying it.

Debugging a BODS job

We now know how to create and execute a job; our next task is debugging a job that has errors. For this, let's deliberately create an error in our job: detach the link between our script SCR_MSG and the workflow WF_TEST1, then save and execute the job. You can see that the job stops execution and a red indicator lights up at the top of the execution window (Figure 4.38). Click on the red button and you can see the error clearly listed.

Figure 4.38

You can test this further by creating your own errors and debugging them. Some errors may not be very easy to understand at first; they become more and more understandable with experience and practice.

Do’s and Don’ts and Programmer tips

1. Do not copy-paste dataflows if you want to re-use the same dataflow with modifications; dataflows should only be replicated.

2. To re-use a dataflow without any modification, select it in the Local Object Library and drag and drop it onto the Designer workspace.

3. Do not rename or copy-paste target tables once created. Should you need to rename a table, delete it entirely from the Local Object Library and re-create it.

4. Within a dataflow, once a target table is created we cannot use it as a starting point; that target table would need to be used as a source in the next dataflow.

5. There is no hard and fast rule for placing or arranging the transforms/tables on the workspace with respect to layout, but it is recommended to keep the objects neatly aligned; for example, avoid kinks in the connector lines between two objects that sit on a straight line.

6. Keep saving your job at each step. BODS, like any other software, can crash at any time and you may lose the work done so far.

7. ATL – The BODS program dump is called an ATL file. It is analogous to an '.exe'.

