Data Cleaning Process

Data cleaning attempts to fill in missing values, smooth out noise while identifying outliers, and correct inconsistencies in the data. As a process, data cleaning is performed iteratively in two steps: discrepancy detection and data transformation.

Missing values of an attribute can be addressed by (see the sketch after this list):

  • ignoring the tuple
  • filling in the value manually
  • using a global constant to fill in the value
  • using a measure of central tendency (the mean or median) of the attribute
  • using the attribute mean or median of the samples belonging to the same class
  • using the most probable value
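
As a rough illustration, here is a minimal pandas sketch of a few of these strategies, assuming a small made-up table with a class label and a numeric attribute:

```python
import pandas as pd

# Hypothetical example: "income" has missing values, "class" is the class label.
df = pd.DataFrame({
    "class":  ["A", "A", "B", "B", "B"],
    "income": [50.0, None, 30.0, None, 40.0],
})

# Fill with a global constant such as -1.
constant_fill = df["income"].fillna(-1)

# Fill with a measure of central tendency (here, the overall mean).
mean_fill = df["income"].fillna(df["income"].mean())

# Fill with the mean of the samples belonging to the same class.
class_mean_fill = df.groupby("class")["income"].transform(lambda s: s.fillna(s.mean()))

print(mean_fill.tolist())        # [50.0, 40.0, 30.0, 40.0, 40.0]
print(class_mean_fill.tolist())  # [50.0, 50.0, 30.0, 35.0, 40.0]
```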

Noise is a random error or variance in a measured variable. Noisy data can be smoothed using the following techniques.

Binning methods smooth a sorted data value by consulting the nearby values around it. The sorted values are distributed into a number of bins, and then (as sketched below the list):

  • In smoothing by bin means, each value in a bin is replaced by the mean value of the bin.
  • In smoothing by bin medians, each value in a bin is replaced by the bin median.
  • In smoothing by bin boundaries, the minimum and maximum values in a given bin are identified as the bin boundaries, and each bin value is replaced by the closest boundary value.
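
Here is a minimal NumPy sketch of the three variants, using illustrative sorted values partitioned into equal-depth bins of three values each:

```python
import numpy as np

# Sorted values partitioned into equal-depth bins of size 3 (illustrative data).
values = np.array([4, 8, 15, 21, 21, 24, 25, 28, 34])
bins = values.reshape(-1, 3)          # each row is one bin

# Smoothing by bin means: every value becomes its bin's mean.
by_means = np.repeat(bins.mean(axis=1), 3)            # [9, 9, 9, 22, 22, 22, 29, 29, 29]

# Smoothing by bin medians: every value becomes its bin's median.
by_medians = np.repeat(np.median(bins, axis=1), 3)     # [8, 8, 8, 21, 21, 21, 28, 28, 28]

# Smoothing by bin boundaries: every value snaps to the closer of its bin's min/max.
lo = bins.min(axis=1, keepdims=True)
hi = bins.max(axis=1, keepdims=True)
by_bounds = np.where(bins - lo <= hi - bins, lo, hi)   # [4, 4, 15, 21, 21, 24, 25, 25, 34]

print(by_means, by_medians, by_bounds.ravel(), sep="\n")
```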

Data smoothing can also be done by regression, a technique that conforms data values to a function.
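
For example, a simple linear regression fit can serve as the smoothing function; the sketch below uses NumPy's polyfit on made-up noisy measurements:

```python
import numpy as np

# Hypothetical noisy measurements of y against x.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 5.0 + rng.normal(scale=3.0, size=x.size)

# Fit a straight line and replace each y with its fitted (smoothed) value.
slope, intercept = np.polyfit(x, y, deg=1)
y_smoothed = slope * x + intercept
```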

Outliers may be detected by clustering. The values that fall outside of the set of clusters may be considered outliers.
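
One way to sketch this is with a density-based clusterer such as scikit-learn's DBSCAN, which labels points it cannot assign to any cluster as noise; the data and parameters here are illustrative:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense clusters plus a couple of stray points (hypothetical data).
rng = np.random.default_rng(1)
cluster_a = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
cluster_b = rng.normal(loc=5.0, scale=0.3, size=(50, 2))
strays = np.array([[10.0, -3.0], [-4.0, 8.0]])
X = np.vstack([cluster_a, cluster_b, strays])

# Points that fall outside every cluster get the label -1 and can be treated as outliers.
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(X)
outliers = X[labels == -1]
print(outliers)
```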

The first step in data cleaning as a process is discrepancy detection. Discrepancies can be checked for by applying rules such as the following.

A unique rule says that each value of the given attribute must be different from all other values for that attribute.

A consecutive rule says that there can be no missing values between the lowest and highest values for the attribute, and that all values must also be unique.

A null rule specifies the use of blanks, question marks, special characters, or other strings that may indicate the null condition and how such values should be handled.
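
As a rough sketch, all three kinds of rules can be checked with a few lines of pandas on a hypothetical ID attribute:

```python
import pandas as pd

# Hypothetical attribute: customer IDs that should satisfy all three rules.
ids = pd.Series([1001, 1002, 1002, 1005, None])

# Unique rule: every value must differ from all other values.
duplicates = ids[ids.duplicated(keep=False)].dropna()

# Null rule: decide what counts as "missing" (blanks, "?", NaN) and flag it.
nulls = ids.isna()

# Consecutive rule: no gaps between the lowest and highest values.
present = set(ids.dropna().astype(int))
gaps = sorted(set(range(min(present), max(present) + 1)) - present)

print(duplicates.tolist(), int(nulls.sum()), gaps)  # [1002.0, 1002.0] 1 [1003, 1004]
```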

Data scrubbing tools use simple domain knowledge to detect errors and make corrections in the data.

Data auditing tools find discrepancies by analyzing the data to discover rules and relationships, and detecting data that violate such conditions.

Data migration tools allow simple transformations to be specified, such as replacing the string “gender” with “sex.”
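
Outside a dedicated tool, the same kind of simple transformation can be expressed directly, for instance as a hypothetical pandas column rename:

```python
import pandas as pd

# Hypothetical table whose "gender" column should be renamed to "sex".
df = pd.DataFrame({"gender": ["F", "M"], "age": [29, 41]})
df = df.rename(columns={"gender": "sex"})
print(df.columns.tolist())  # ['sex', 'age']
```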

ETL (extraction/transformation/loading) tools allow users to specify transforms through a graphical user interface (GUI).

Potter’s Wheel is a publicly available data cleaning tool that integrates discrepancy detection and transformation.

