Solution For Complexities Of Big Data – Data Smashing

“Just as smashing atoms can reveal their composition, ‘colliding’ quantitative data streams can reveal their hidden structure,” writes Chattopadhyay.

Most data comparison algorithms today have one major weakness – somewhere, they rely on a human expert to specify what aspects of the data are relevant for comparison, and what aspects aren’t. In the era of Big Data, however, experts aren’t keeping pace with the growing amounts and complexities of data.

Now, Cornell computing researchers have come up with a new principle they call “data smashing” for estimating the similarities between streams of arbitrary data without human intervention, and without access to the data sources. Hod Lipson, associate professor of mechanical engineering and computing and information science, and Ishanu Chattopadhyay, a former postdoctoral associate with Lipson and now at the University of Chicago, have described their method in Royal Society Interface.

Data smashing is based on a new way to compare data streams. The process involves two steps. First, the data streams are algorithmically “smashed” to “annihilate” the information in each other. Then, the process measures what information remains after the collision. The more information that remains, the less likely it is that the streams originated from the same source.
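In the simplest possible setting the collide-and-measure idea can be sketched in a few lines of Python. The sketch below is only an illustration for memoryless (i.i.d.) symbol streams and is not the authors’ actual construction, which operates on streams modeled by probabilistic automata; the function names (anti_stream, collide, smash) and the coin-flip example are invented for this sketch.

```python
import random
from collections import Counter

def anti_stream(stream, alphabet):
    """Toy 'inversion': emit symbols with probability proportional to the
    reciprocal of their empirical frequency in `stream` (memoryless case only)."""
    counts = Counter(stream)
    weights = [1.0 / (counts[s] + 1) for s in alphabet]   # +1 guards against zero counts
    return random.choices(alphabet, weights=weights, k=len(stream))

def collide(a, b):
    """Toy 'summation': keep only the positions where the two streams agree."""
    return [x for x, y in zip(a, b) if x == y]

def deviation_from_flat(residue, alphabet):
    """Total-variation distance between the residue's symbol distribution and
    the uniform distribution (0 means the residue looks like flat noise)."""
    if not residue:
        return 1.0
    counts = Counter(residue)
    uniform = 1.0 / len(alphabet)
    return 0.5 * sum(abs(counts[s] / len(residue) - uniform) for s in alphabet)

def smash(a, b, alphabet):
    """Collide `a` with the anti-stream of `b`; a small value suggests the two
    streams look like output of the same (memoryless) source."""
    return deviation_from_flat(collide(a, anti_stream(b, alphabet)), alphabet)

alphabet = ["0", "1"]
stream_a = random.choices(alphabet, weights=[0.8, 0.2], k=20000)  # biased source
stream_b = random.choices(alphabet, weights=[0.8, 0.2], k=20000)  # same biased source
stream_c = random.choices(alphabet, weights=[0.5, 0.5], k=20000)  # different, fair source
print(smash(stream_a, stream_b, alphabet))  # close to 0
print(smash(stream_a, stream_c, alphabet))  # noticeably larger (about 0.3 here)
```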


Any time a data mining algorithm searches beyond simple correlations, a human expert must help define a notion of similarity – by specifying important distinguishing “features” of the data to compare, or by training learning algorithms with copious amounts of examples. The data smashing principle removes the reliance on expert-defined features or examples and, in many cases, does so faster and with better accuracy than traditional methods, according to Chattopadhyay.

Read more: http://www.33rdsquare.com/2014/10/data-smashing-used-to-find-hidden.html#ixzz3HfiDazDg
Posted by F. Sheikh

Mathematics of Roughness

By Jim Holt

Benoit Mandelbrot, the brilliant Polish-French-American mathematician who died in 2010, had a poet’s taste for complexity and strangeness. His genius for noticing deep links among far-flung phenomena led him to create a new branch of geometry, one that has deepened our understanding of both natural forms and patterns of human behavior. The key to it is a simple yet elusive idea, that of self-similarity.

To see what self-similarity means, consider a homely example: the cauliflower. Take a head of this vegetable and observe its form—the way it is composed of florets. Pull off one of those florets. What does it look like? It looks like a little head of cauliflower, with its own subflorets. Now pull off one of those subflorets. What does that look like? A still tinier cauliflower. If you continue this process—and you may soon need a magnifying glass—you’ll find that the smaller and smaller pieces all resemble the head you started with. The cauliflower is thus said to be self-similar. Each of its parts echoes the whole.
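To make the analogy concrete, here is a small, purely illustrative Python sketch (not from the article): a structure whose parts are scaled-down copies of the whole, so that pulling off a floret yields the same form again, only smaller. The branching factor and scale are arbitrary choices.

```python
def cauliflower(size, florets_per_head=5, scale=3.0, min_size=0.02):
    """Toy self-similar structure: a head of a given size made of smaller
    copies of itself, until the pieces become too small to subdivide."""
    if size < min_size:
        return {"size": size, "florets": []}
    return {
        "size": size,
        "florets": [cauliflower(size / scale, florets_per_head, scale, min_size)
                    for _ in range(florets_per_head)],
    }

head = cauliflower(size=1.0)
floret = head["florets"][0]        # pull off one floret ...
subfloret = floret["florets"][0]   # ... then one of its subflorets
# Each piece has the same form as the whole, just `scale` times smaller:
print(head["size"], floret["size"], subfloret["size"])   # 1.0  0.333...  0.111...
```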

Other self-similar phenomena, each with its distinctive form, include clouds, coastlines, bolts of lightning, clusters of galaxies, the network of blood vessels in our bodies, and, quite possibly, the pattern of ups and downs in financial markets. The closer you look at a coastline, the more you find it is jagged, not smooth, and each jagged segment contains smaller, similarly jagged segments that can be described by Mandelbrot’s methods. Because of the essential roughness of self-similar forms, classical mathematics is ill-equipped to deal with them. Its methods, from the Greeks on down to the last century, have been better suited to smooth forms, like circles. (Note that a circle is not self-similar: if you cut it up into smaller and smaller segments, those segments become nearly straight.)

Only in the last few decades has a mathematics of roughness emerged, one that can get a grip on self-similarity and kindred matters like turbulence, noise, clustering, and chaos. And Mandelbrot was the prime mover behind it. He had a peripatetic career, but he spent much of it as a researcher for IBM in upstate New York. In the late 1970s he became famous for popularizing the idea of self-similarity, and for coining the word “fractal” (from the Latin fractus, meaning broken) to designate self-similar forms. In 1980 he discovered the “Mandelbrot set,” whose shape—it looks a bit like a warty snowman or beetle—came to represent the newly fashionable science of chaos. What is perhaps less well known about Mandelbrot is the subversive work he did in economics. The financial models he created, based on his fractal ideas, implied that stock and currency markets were far riskier than the reigning consensus in business schools and investment banks supposed, and that wild gyrations—like the 777-point plunge in the Dow on September 29, 2008—were inevitable.
Full article on link below!

http://www.nybooks.com/articles/archives/2013/may/23/mandlebrot-mathematics-of-roughness/
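For readers curious about the object itself: the Mandelbrot set consists of the complex numbers c for which the iteration z → z² + c, started at z = 0, never escapes to infinity. A minimal escape-time sketch follows (the grid resolution and iteration limit are arbitrary choices for illustration):

```python
def in_mandelbrot(c, max_iter=100):
    """True if the orbit of 0 under z -> z*z + c appears to stay bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| exceeds 2, the orbit is known to escape
            return False
    return True

# Crude character plot of the region [-2, 1] x [-1.25, 1.25] of the complex plane.
for row in range(24):
    y = 1.25 - row * 2.5 / 23
    print("".join("#" if in_mandelbrot(complex(-2.0 + col * 3.0 / 71, y)) else " "
                  for col in range(72)))
```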

 

A MOST PROFOUND MATH PROBLEM


On August 6, 2010, a computer scientist named Vinay Deolalikar published a paper with a name as concise as it was audacious: “P ≠ NP.” If Deolalikar was right, he had cut one of mathematics’ most tightly tied Gordian knots. In 2000, the P = NP problem was designated by the Clay Mathematics Institute as one of seven Millennium Problems—“important classic questions that have resisted solution for many years”—only one of which has been solved since. (The Poincaré Conjecture was vanquished in 2003 by the reclusive Russian mathematician Grigory Perelman, who refused the attached million-dollar prize.)

 

A few of the Clay problems are long-standing head-scratchers. The Riemann hypothesis, for example, made its debut in 1859. By contrast, P versus NP is relatively young, having been introduced by the University of Toronto mathematical theorist Stephen Cook in 1971, in a paper titled “The complexity of theorem-proving procedures,” though it had been touched upon two decades earlier in a letter by Kurt Gödel, whom David Foster Wallace branded “modern math’s absolute Prince of Darkness.” The question inherent in those three letters is a devilish one: Does P (problems that we can easily solve) equal NP (problems that we can easily check)?

Take your e-mail password as an analogy. Its veracity is checked within a nanosecond of your hitting the return key. But for someone to solve your password would probably be a fruitless pursuit, involving a near-infinite number of letter-number permutations—a trial and error lasting centuries upon centuries. Deolalikar was saying, in essence, that there will always be some problems for which we can recognize an answer without being able to quickly find one—intractable problems that lie beyond the grasp of even our most powerful microprocessors, that consign us to a world that will never be quite as easy as some futurists would have us believe. There always will be problems unsolved, answers unknown.
Click link to read full article:

http://www.newyorker.com/online/blogs/elements/2013/05/a-most-profound-math-problem.html
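The check-versus-solve asymmetry in the password analogy above can be made concrete with a toy sketch; the secret, the alphabet, and the length bound are invented for illustration, and real password systems are of course more involved.

```python
import itertools
import string

SECRET = "qzx"                    # an invented stored password, for illustration only
ALPHABET = string.ascii_lowercase

def check(guess):
    """Verification is cheap: a single comparison, however large the search space is."""
    return guess == SECRET

def solve(max_len=4):
    """Brute-force search: the number of candidates grows as 26**k with length k."""
    for k in range(1, max_len + 1):
        for letters in itertools.product(ALPHABET, repeat=k):
            guess = "".join(letters)
            if check(guess):
                return guess
    return None

print(check("abc"))   # False, answered immediately
print(solve())        # finds "qzx", but only after testing on the order of 26**3 guesses
```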

(Posted by F. Sheikh)

Different forms of mathematical thought

Posted by Noor Salik

One makes the distinction in mathematics between:
(i) Continuous thinking (for example real numbers and limits), and
(ii) Discrete thinking (for example natural numbers and number theory).
Experience shows that continuous problems are often easier to treat than discrete ones. The great successes of the continuous way of thinking are based on the notion of limits and the theories connected with this notion (calculus, differential equations, integral equations and the calculus of variations), with diverse applications in physics and other natural sciences.
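A textbook instance of the limit notion at the heart of this continuous way of thinking (added here as an illustration; it is not part of the excerpt) is the derivative of calculus, defined as a limit of difference quotients:

\[
  f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.
\]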

In contrast, number theory is the prototype for the creation of effective mathematical methods for treating discrete problems, which arise in today’s world in computer science, in the optimization of discrete systems, and in the lattice models of theoretical physics used to study elementary particles and strings.
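A classic example of such an effective number-theoretic method (chosen here only as an illustration, not singled out in the excerpt) is Euclid’s algorithm for the greatest common divisor:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 198))   # 18, reached in just four division steps
```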

The epochal discovery by Max Planck in 1900 that the energy of the harmonic oscillator is not continuous but rather discrete (quantized) led to the important mathematical problem of generating discrete structures from continuous ones by an appropriate, non-trivial quantization process.
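In modern notation (stated here for context, not taken from the excerpt), the allowed energies of a harmonic oscillator of frequency ν form a discrete ladder rather than a continuum; Planck’s original hypothesis used the steps E_n = nhν, and the full quantum-mechanical treatment adds a zero-point term:

\[
  E_n \;=\; h\nu\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots
\]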

Oxford Users’ Guide to Mathematics