Business Intelligence & Analytics Community

Saurabh Jain

Standardization vs. normalization?


In the overall knowledge discovery process, data preprocessing plays a crucial role before data mining itself. One of the first steps is rescaling the data, which is especially important when the parameters have different units and scales. For example, many data mining techniques rely on the Euclidean distance, so all parameters should be on the same scale for the comparison between them to be fair.


Two methods are commonly used for rescaling data. The first is normalization (min-max scaling), which maps all numeric variables into the range [0, 1]. One possible formula is given below:

x' = (x − min(x)) / (max(x) − min(x))
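
For illustration, here is a minimal sketch of min-max normalization in Python with NumPy (the sample array and the helper name min_max_normalize are just made up for this example):

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Rescale the values of x into the range [0, 1]."""
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

x = np.array([2.0, 5.0, 9.0, 14.0])
print(min_max_normalize(x))  # [0.     0.25   0.5833 1.    ]
```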


On the other hand, you can use standardization (z-score scaling) on your data set. It transforms the data to have zero mean and unit variance, for example using the equation below:

x' = (x − μ) / σ, where μ is the mean and σ is the standard deviation of the variable.
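
And a corresponding sketch for standardization (again, the function name and sample data are only illustrative; it uses the population standard deviation, which is NumPy's default):

```python
import numpy as np

def standardize(x: np.ndarray) -> np.ndarray:
    """Transform x to have zero mean and unit variance (z-scores)."""
    return (x - x.mean()) / x.std()

x = np.array([2.0, 5.0, 9.0, 14.0])
z = standardize(x)
print(z.mean().round(6), z.std())  # 0.0 1.0
```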


Both of these techniques have drawbacks. If your data set contains outliers (and most data sets do), normalization will squeeze the “normal” data into a very small interval. Standardization, on the other hand, does not produce bounded values (unlike normalization).
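
A quick toy comparison (made-up data with a single outlier) shows the effect I mean:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one outlier

x_norm = (x - x.min()) / (x.max() - x.min())  # min-max normalization
x_std = (x - x.mean()) / x.std()              # standardization

print(x_norm)  # the four "normal" values end up squeezed into [0, 0.03]
print(x_std)   # values are unbounded; the outlier lands near +2
```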


So my question is: what do you usually use when mining your data, and why?
 
