The computing scientist's main challenge is not to get confused by the complexities of (their) own making.
-- E.W. Dijkstra
Much data is redundant, noisy, or irrelevant.
So one way to use less data is to share only a small number of prototypes; i.e., to use fewer rows. Not only that, but we can also use fewer columns.
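To make the "fewer rows" idea concrete, here is a minimal sketch (my own illustration, assuming scikit-learn is available) that uses k-means centroids as prototypes; the cluster count k=32 is arbitrary, and clustering is only one of several ways to pick prototypes.

```python
# A minimal sketch of prototype selection: replace many rows with the
# centroids of a k-means clustering, so a few "typical" rows stand in
# for the whole table. Requires scikit-learn; k=32 is arbitrary.
import numpy as np
from sklearn.cluster import KMeans

def prototypes(X, k=32, seed=1):
    """Return k prototype rows that summarize the rows of X."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    return km.cluster_centers_

X = np.random.rand(10000, 20)   # 10,000 rows, 20 columns
P = prototypes(X)               # 32 rows, 20 columns
print(X.shape, "->", P.shape)
```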
That is, if we were so foolish as to try to build high-dimensional models, we would fail, since the region where related examples can be found becomes vanishingly small. This effect is often called the curse of dimensionality.
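To see the curse numerically, the sketch below (an illustration of my own, not from the original text) draws random points in d dimensions and measures how the gap between a query point's nearest and farthest neighbor shrinks as d grows; once that contrast vanishes, "related examples" become indistinguishable from unrelated ones.

```python
# Distance concentration demo: as the number of dimensions d grows,
# the contrast between a query point's nearest and farthest neighbor
# shrinks, so "nearby" loses its meaning.
import numpy as np

rng = np.random.default_rng(1)
for d in [2, 10, 100, 1000]:
    X = rng.random((1000, d))               # 1000 random points in d dims
    q = rng.random(d)                       # one random query point
    dist = np.linalg.norm(X - q, axis=1)
    contrast = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:>4}: relative contrast = {contrast:.2f}")
```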
Note that this curse can also be a blessing:
There is much empirical evidence that, just because a data set has n columns, we need not use them all. Numerous researchers have examined what happens when a data miner deliberately ignores some of the columns in the training data. For example, the experiments of Ron Kohavi and George John show that, on many real-world datasets, over 80% of the columns can be ignored. Further, ignoring those columns does not degrade the learner's classification accuracy (in fact, it sometimes even yields small improvements).
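Kohavi and John used wrapper-based feature subset selection; the sketch below is a simpler stand-in (a univariate filter via scikit-learn) that makes the same kind of comparison: a classifier scored with all columns versus only a handful of selected columns.

```python
# A stand-in for the Kohavi & John style experiment: compare a
# classifier's cross-validated accuracy with all columns versus a
# handful of selected columns. (They used a wrapper method; this
# sketch uses a simpler univariate filter. For a strict comparison,
# selection should happen inside each fold, e.g. via a Pipeline.)
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)                # 30 columns
all_cols = cross_val_score(GaussianNB(), X, y, cv=10).mean()
X_few = SelectKBest(f_classif, k=5).fit_transform(X, y)   # keep 5 of 30
few_cols = cross_val_score(GaussianNB(), X_few, y, cv=10).mean()
print(f"all 30 columns: {all_cols:.3f}  best 5 columns: {few_cols:.3f}")
```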
Further, if we combine both prototype and column selection, the net result can be a dramatic reduction in the complexity of the data:
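If (say) 80% of the columns and 99% of the rows can be dropped, the summary holds under 1% of the original cells. The sketch below (an illustration on synthetic data, not a result from the text) chains the two steps above.

```python
# Chaining both reductions: drop near-constant columns, then replace
# the surviving rows with a few k-means prototypes. The data is
# synthetic (4 informative columns padded with 16 near-constant ones);
# the point is the multiplicative shrinkage, not the exact numbers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import VarianceThreshold

rng = np.random.default_rng(1)
X = np.hstack([rng.random((10000, 4)),           # 4 informative columns
               0.01 * rng.random((10000, 16))])  # 16 near-constant columns
X2 = VarianceThreshold(threshold=0.001).fit_transform(X)  # fewer columns
P = KMeans(n_clusters=32, n_init=10, random_state=1).fit(X2).cluster_centers_
print(X.shape, "->", P.shape)                    # (10000, 20) -> (32, 4)
```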
The only way larger data sets can be summarized into smaller ones is if there are superfluous details in the larger set. Hence, before we can advocate such summarizations, we must first offer a measure of data set simplicity and only summarize the simpler data. The next figure offers intrinsic dimensionality as such a measure and applies it to 10 data sets with 21 columns of data. As shown in that figure, the intrinsic dimensionality of our data sets can be very small indeed. It is hardly surprising that such intrinsically low-dimensional data sets can be summarized in half a dozen columns and a few dozen rows.
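One common way to estimate intrinsic dimensionality is the correlation dimension, sketched below; this is an illustration of the general idea, and the figure referenced above may use a different estimator. The trick is that in d-dimensional data, the number of point pairs within radius r grows like r^d, so d can be read off a log-log slope.

```python
# One illustrative estimate of intrinsic dimensionality: the
# correlation dimension. Count the fraction C(r) of point pairs
# closer than r, then read the dimensionality off the slope of
# log C(r) versus log r. (The figure may use a different estimator.)
import numpy as np
from scipy.spatial.distance import pdist

def intrinsic_dim(X):
    d = pdist(X)                                # all pairwise distances
    rs = np.percentile(d, [5, 10, 20, 40])      # a few probe radii
    counts = [np.mean(d < r) for r in rs]       # C(r) at each radius
    slope, _ = np.polyfit(np.log(rs), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
X3 = rng.random((1000, 3))                      # genuinely 3-d data
X21 = np.hstack([X3] + [X3[:, :1]] * 18)        # 21 columns, same 3 dims
print(intrinsic_dim(X3), intrinsic_dim(X21))    # both roughly 3
```

Note how the second data set has 21 columns but, since 18 of them are copies of the first, its estimated dimensionality stays near 3: many columns, few dimensions.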