The computing scientist's main challenge is not to get confused by the complexities of (their) own making.
-- E.W. Dijkstra
Much data is redundant, noisy, or irrelevant. For example, many rows merely repeat information found in other rows, and (as discussed below) most columns can often be ignored entirely.
So one way to use less data is to share only a small number of prototypes; i.e., use fewer rows (one such approach is sketched below). Not only that, but we can also use fewer columns.
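As an illustration of the fewer-rows idea, here is a minimal prototype-selection sketch: cluster the rows, then keep only the row nearest each cluster center. This is our own illustrative construction; the function name `select_prototypes`, the use of KMeans, and the random data are assumptions, not taken from any particular paper.

```python
# A minimal prototype-selection sketch: cluster rows, keep one per cluster.
# Illustrative only: `select_prototypes`, KMeans, and the random data are
# our own choices here, not from a specific paper.
import numpy as np
from sklearn.cluster import KMeans

def select_prototypes(X, k=20, seed=1):
    """Return the indices of the rows nearest each of k cluster centers."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    keep = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        keep.append(members[np.argmin(dists)])
    return np.array(keep)

X = np.random.rand(1000, 21)       # 1000 rows, 21 columns of toy data
rows = select_prototypes(X, k=20)  # keep just 20 representative rows
print(X[rows].shape)               # (20, 21): 50 times fewer rows
```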
Why fewer columns? If we were so foolish as to try to build high-dimensional models, we would fail, since the region where we can find related examples becomes vanishingly small. This is often called the curse of dimensionality.
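A few lines of code make the curse concrete. In this sketch (plain numpy on random data; the sizes are arbitrary), the gap between the nearest and farthest neighbor all but disappears as the number of columns grows, so no example is meaningfully "nearer" than any other:

```python
# The curse of dimensionality in action: as dimensionality d grows, the
# ratio of nearest to farthest neighbor distance approaches 1, i.e. all
# points become roughly equidistant. Sizes here are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
for d in (2, 10, 100, 1000):
    X = rng.random((500, d))                      # 500 random points in d dims
    dists = np.linalg.norm(X - X[0], axis=1)[1:]  # distances from point 0
    print(d, round(dists.min() / dists.max(), 2)) # ratio climbs toward 1
```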
Note that this curse can also be a blessing:
There is much empirical evidence that just because a data set has n columns, we need not use them all. Numerous researchers have examined what happens when a data miner deliberately ignores some of the columns in the training data. For example, the experiments of Ron Kohavi and George John show that, on numerous real-world datasets, over 80% of columns can be ignored. Further, ignoring those columns doesn't degrade the learner's classification accuracy (in fact, it sometimes even results in small improvements).
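Kohavi and John's work used a "wrapper" that searches for column subsets using the learner itself. What follows is a much-simplified, wrapper-style sketch in that spirit, not their exact algorithm: greedily add a column only while cross-validated accuracy improves. The learner, fold count, and toy data below are our own assumptions.

```python
# A simplified wrapper-style forward selection, in the spirit of (but not
# identical to) Kohavi and John: keep adding the single column that most
# improves cross-validated accuracy, stopping when nothing improves.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def forward_select(X, y, folds=5):
    chosen, best = [], 0.0
    remaining = list(range(X.shape[1]))
    improved = True
    while improved and remaining:
        improved = False
        for col in remaining:
            score = cross_val_score(DecisionTreeClassifier(random_state=1),
                                    X[:, chosen + [col]], y, cv=folds).mean()
            if score > best:
                best, pick, improved = score, col, True
        if improved:
            chosen.append(pick)
            remaining.remove(pick)
    return chosen, best

rng = np.random.default_rng(1)
X = rng.random((300, 21))
y = (X[:, 0] > 0.5).astype(int)  # only column 0 actually matters here
print(forward_select(X, y))      # picks very few of the 21 columns
```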
Further, if we combine both prototype and column selection, the net result can be a dramatic reduction in the complexity of the data.
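To make that concrete, here is an illustrative sketch composing both reductions with off-the-shelf scikit-learn pieces; the counts (24 prototypes, 6 columns) and the toy data are arbitrary choices, not recommendations:

```python
# Combining both reductions: first pick the most informative columns, then
# summarize the rows as a few cluster centers. All sizes are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
X = rng.random((1000, 21))               # 1000 rows x 21 columns
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # toy class labels

cols = np.argsort(mutual_info_classif(X, y, random_state=1))[-6:]
km = KMeans(n_clusters=24, n_init=10, random_state=1).fit(X[:, cols])
small = km.cluster_centers_              # 24 rows x 6 columns
print(X.size, small.size)                # 21000 vs 144 cells: ~150x smaller
```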
The only way larger data sets can be summarized into smaller ones is if there are superfluous details in the larger set. Hence, before we can advocate such summarizations, we must first offer a measure of data set simplicity and only summarize the simpler data. The next figure offers intrinsic dimensionality as such a measure and applies it to 10 data sets, each with 21 columns of data. As shown in that figure, the intrinsic dimensionality of our data sets can be very small indeed. It is hardly surprising that such intrinsically low-dimensional data sets can be summarized in half a dozen columns and a few dozen rows.
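Many such measures exist. One standard choice, sketched below, is the correlation dimension: count how quickly the number of nearby pairs of rows grows as the neighborhood radius grows. The two-radius version here is a simplification, and the figure mentioned above may use a different variant:

```python
# Estimating intrinsic dimensionality via the correlation dimension:
# if the fraction of row pairs within radius r scales as r^D, then D is
# the intrinsic dimension. The two-radius slope below is a simplification.
import numpy as np
from scipy.spatial.distance import pdist

def intrinsic_dimension(X, r1=0.1, r2=0.2):
    d = pdist(X)        # all pairwise row distances
    d = d / d.max()     # normalize distances to [0, 1]
    c1 = np.mean(d < r1)
    c2 = np.mean(d < r2)
    return (np.log(c2) - np.log(c1)) / (np.log(r2) - np.log(r1))

rng = np.random.default_rng(1)
X = rng.random((500, 3)) @ rng.random((3, 21))  # 21 columns, but rank 3
print(round(intrinsic_dimension(X), 1))         # near 3, far below 21
```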