“Just as smashing atoms can reveal their composition, ‘colliding’ quantitative data streams can reveal their hidden structure,” writes Chattopadhyay.
Most data comparison algorithms today have one major weakness – somewhere, they rely on a human expert to specify which aspects of the data are relevant for comparison and which aren’t. In the era of Big Data, however, experts aren’t keeping pace with the growing volume and complexity of data.
Now, Cornell computing researchers have come up with a new principle they call “data smashing” for estimating the similarities between streams of arbitrary data without human intervention, and without access to the data sources. Hod Lipson, associate professor of mechanical engineering and computing and information science, and Ishanu Chattopadhyay, a former postdoctoral associate with Lipson and now at the University of Chicago, have described their method in Royal Society Interface.
Data smashing is based on a new way to compare data streams. The process involves two steps. First, the data streams are algorithmically “smashed” to “annihilate” each other’s information. Then the process measures what information remains after the collision: the more information remains, the less likely it is that the streams originated from the same source.
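To make the two steps concrete, here is a toy sketch in Python. It is not the authors’ published algorithm (which handles sources with memory via probabilistic automata and works directly on raw symbol streams); it only illustrates the idea for the simplest case of memoryless (i.i.d.) symbol sources, where “colliding” one stream with the inverse of another amounts to taking a normalized ratio of their empirical symbol distributions, and the “remaining information” can be measured as the collided distribution’s deviation from flat (uniform) noise. All function names here are illustrative, not from the paper.

```python
import random
from collections import Counter

def symbol_dist(stream, alphabet):
    """Empirical symbol distribution of a stream over a fixed alphabet."""
    counts = Counter(stream)
    n = len(stream)
    return [counts[a] / n for a in alphabet]

def collide(p, q):
    """Toy 'collision' of distribution p with the inverse of q.

    For memoryless sources, the collided distribution is proportional
    to p_i / q_i; if p == q, the result is exactly uniform ("annihilated").
    """
    raw = [pi / qi for pi, qi in zip(p, q)]
    total = sum(raw)
    return [x / total for x in raw]

def remaining_information(p, q):
    """Total-variation distance of the collided distribution from uniform.

    Near zero => the streams plausibly share a source; larger => less likely.
    """
    r = collide(p, q)
    k = len(r)
    return 0.5 * sum(abs(x - 1.0 / k) for x in r)

if __name__ == "__main__":
    random.seed(0)
    alphabet = [0, 1]
    # Two streams from the same biased coin, one from a different coin.
    a = random.choices(alphabet, weights=[0.7, 0.3], k=5000)
    b = random.choices(alphabet, weights=[0.7, 0.3], k=5000)
    c = random.choices(alphabet, weights=[0.3, 0.7], k=5000)
    same = remaining_information(symbol_dist(a, alphabet), symbol_dist(b, alphabet))
    diff = remaining_information(symbol_dist(a, alphabet), symbol_dist(c, alphabet))
    print(same, diff)  # same-source score should be much smaller
```

The key property the sketch preserves is that colliding a stream with (the inverse of) a statistically identical stream leaves nothing but flat noise, so similarity can be judged without ever naming features of the data.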
Any time a data mining algorithm searches beyond simple correlations, a human expert must help define a notion of similarity – by specifying important distinguishing “features” of the data to compare, or by training learning algorithms on copious examples. The data smashing principle removes the reliance on expert-defined features or examples and, according to Chattopadhyay, in many cases does so faster and more accurately than traditional methods.
Read more: http://www.33rdsquare.com/2014/10/data-smashing-used-to-find-hidden.html
Posted by F. Sheikh