The Normalizer transformation is a native Informatica transformation that can simplify many complex data transformation requirements. Learn how to use the Normalizer effectively in this tutorial.

### What is a Normalizer Transformation?

In a nutshell, here is what a Normalizer is and does:

• Active Transformation
• Can output multiple rows for each input row
• Can transpose data, turning columns into rows

A Normalizer is an Active transformation that returns multiple rows from a single source row; it returns duplicate data for single-occurring source columns. The Normalizer transformation parses multiple-occurring columns from COBOL sources, relational tables, or other sources. A Normalizer can be used to transpose data in columns to rows.

The Normalizer effectively does the opposite of what an Aggregator does!

### Transposing data using Normalizer

Let's imagine we have a table like the one below that stores the sales figures for the 4 quarters of a year in 4 different columns. As you can see, each row represents one shop and the columns hold the corresponding sales. Now imagine our task is to generate a result set with a separate row for every quarter. We can configure a Normalizer transformation to return a separate row for each quarter, as shown below.

The following source rows contain four quarters of sales by store:

Source Table:

| Store  | Quarter1 | Quarter2 | Quarter3 | Quarter4 |
|--------|----------|----------|----------|----------|
| Shop 1 | 100      | 300      | 500      | 700      |
| Shop 2 | 250      | 450      | 650      | 850      |

The Normalizer returns a row for each shop and sales combination. It also returns an index, called the GCID (explained in detail later), that identifies the quarter number:

Target Table:

| Shop   | Sales | Quarter |
|--------|-------|---------|
| Shop 1 | 100   | 1       |
| Shop 1 | 300   | 2       |
| Shop 1 | 500   | 3       |
| Shop 1 | 700   | 4       |
| Shop 2 | 250   | 1       |
| Shop 2 | 450   | 2       |
| Shop 2 | 650   | 3       |
| Shop 2 | 850   | 4       |
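The Normalizer's transpose logic can be sketched in plain Python. This is a simplified illustration of the concept, not Informatica code; the function and field names are chosen for this example.

```python
# A minimal sketch of what the Normalizer does to the sales data:
# one input row with four quarter columns becomes four output rows,
# each carrying a GCID-style index (1-4) identifying the source column.

source_rows = [
    {"Store": "Shop 1", "Quarter1": 100, "Quarter2": 300, "Quarter3": 500, "Quarter4": 700},
    {"Store": "Shop 2", "Quarter1": 250, "Quarter2": 450, "Quarter3": 650, "Quarter4": 850},
]

def normalize(rows, occurring_cols):
    """Return one output row per (input row, multiple-occurring column) pair."""
    out = []
    for row in rows:
        for gcid, col in enumerate(occurring_cols, start=1):
            # The single-occurring column (Store) is duplicated on every
            # output row; gcid records which quarter column the value came from.
            out.append({"Shop": row["Store"], "Sales": row[col], "Quarter": gcid})
    return out

target_rows = normalize(source_rows, ["Quarter1", "Quarter2", "Quarter3", "Quarter4"])
for r in target_rows:
    print(r["Shop"], r["Sales"], r["Quarter"])
```

Note how two source rows fan out into eight target rows, exactly matching the target table above.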

### How to use Normalizer transformation inside Informatica Mapping

Now that you know the concept of a Normalizer, let's see how to implement it using the Normalizer transformation. We will take a different data set for this example. Suppose we have the following data in the source:

| Name | Month | Transportation | House Rent | Food |
|------|-------|----------------|------------|------|
| Sam  | Jan   | 200            | 1500       | 500  |
| John | Jan   | 300            | 1200       | 300  |
| Tom  | Jan   | 300            | 1350       | 350  |
| Sam  | Feb   | 300            | 1550       | 450  |
| John | Feb   | 350            | 1200       | 290  |
| Tom  | Feb   | 350            | 1400       | 350  |

and we need to transform the source data to populate the target table as below:

| Name | Month | Expense Type | Expense |
|------|-------|--------------|---------|
| Sam  | Jan   | Transport    | 200     |
| Sam  | Jan   | House rent   | 1500    |
| Sam  | Jan   | Food         | 500     |
| John | Jan   | Transport    | 300     |
| John | Jan   | House rent   | 1200    |
| John | Jan   | Food         | 300     |
| Tom  | Jan   | Transport    | 300     |
| Tom  | Jan   | House rent   | 1350    |
| Tom  | Jan   | Food         | 350     |

Below is the screenshot of a complete mapping that shows how to achieve this result using Informatica PowerCenter Designer.

As you can see, the Normalizer transformation is placed right after the Source Qualifier. The next section explains how to set up the properties of the Normalizer.

### Setting Up Normalizer Transformation Property

First, we need to set the number-of-occurrences property of the expense head to 3 in the Normalizer tab of the Normalizer transformation. This is because we have 3 different types of expenses in the given data: Food, House Rent and Transportation.

As soon as we set the occurrences to 3, the Normalizer automatically creates 3 corresponding input ports in the Ports tab, along with the other fields (e.g. the "Individual" and "Month" fields). These 3 input ports are EXPENSEHEAD_in1, EXPENSEHEAD_in2 and EXPENSEHEAD_in3. We connect these input ports to the Food, House Rent and Transportation ports from the Source Qualifier. The image below shows the number-of-occurrences property being set.

Next, in the Ports tab of the Normalizer, the ports are created automatically as configured in the Normalizer tab.

Interestingly, we will observe two new ports here: the generated key (GK) and the generated column ID (GCID).

These ports can be seen in the screenshot below.

The GK field generates a sequence number starting from the value defined in the Sequence field, while the GCID holds the occurrence number of the multiple-occurring field, i.e. the column number of the input expense head.

In our case, 1 stands for FOOD, 2 for HOUSERENT and 3 for TRANSPORTATION. The GCID thus tells us which expense type each row corresponds to when the columns are converted to rows.
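To make GK and GCID concrete, here is a small Python sketch of how their values are produced for the expense data. This is an illustration only, assuming the generated key increments once per source row starting from a sequence value of 1; the column order follows the occurrence numbering above (1=FOOD, 2=HOUSERENT, 3=TRANSPORTATION).

```python
# Sketch of GK and GCID generation for the expense data.
# GK: a running sequence number, incremented once per source row
# (assumed to start from 1, the value configured in the Sequence field).
# GCID: the occurrence (column) number of the multiple-occurring field.

source_rows = [
    {"Name": "Sam", "Month": "Jan", "Food": 500, "Houserent": 1500, "Transportation": 200},
    {"Name": "John", "Month": "Jan", "Food": 300, "Houserent": 1200, "Transportation": 300},
]

expense_cols = ["Food", "Houserent", "Transportation"]  # occurrences 1, 2, 3

gk = 1  # starts from the configured sequence value
output = []
for row in source_rows:
    for gcid, col in enumerate(expense_cols, start=1):
        # All rows generated from the same source row share the same GK,
        # so GK ties each output row back to its source row.
        output.append({"GK": gk, "GCID": gcid,
                       "Name": row["Name"], "Month": row["Month"],
                       "Expense": row[col]})
    gk += 1

for r in output:
    print(r)
```

Each source row yields three output rows, and the (GK, GCID) pair uniquely identifies where every expense value came from.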

Below is the screenshot of the expression used to handle this GCID:

As you can see above, a DECODE statement is used to assign the proper label to the output expense head field.
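The DECODE call in that expression maps each GCID value to a readable expense-type label. The equivalent lookup can be sketched in Python; the function name and the "UNKNOWN" default are illustrative, not taken from the mapping.

```python
# Python equivalent of the DECODE expression that turns the GCID
# (1, 2 or 3) into an expense-type label, following the occurrence
# order described above: 1=FOOD, 2=HOUSERENT, 3=TRANSPORTATION.

def decode_expense_head(gcid):
    labels = {1: "FOOD", 2: "HOUSERENT", 3: "TRANSPORTATION"}
    # DECODE can return a default when no value matches; "UNKNOWN" here.
    return labels.get(gcid, "UNKNOWN")

print(decode_expense_head(1))  # FOOD
print(decode_expense_head(3))  # TRANSPORTATION
```

Connecting this decoded label (instead of the raw GCID) to the target's Expense Type column produces the Transport/House rent/Food labels shown in the target table.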
