The Biggest Mystery in Mathematics

A Japanese mathematician claims to have solved one of the most important problems in his field. The trouble is, hardly anyone can work out whether he’s right.

Sometime on the morning of 30 August 2012, Shinichi Mochizuki quietly posted four papers on his website.

The papers were huge — more than 500 pages in all — packed densely with symbols, and the culmination of more than a decade of solitary work. They also had the potential to be an academic bombshell. In them, Mochizuki claimed to have solved the abc conjecture, a 27-year-old problem in number theory that no other mathematician had even come close to solving. If his proof was correct, it would be one of the most astounding achievements of mathematics this century and would completely revolutionize the study of equations with whole numbers.

Mochizuki, however, did not make a fuss about his proof. The respected mathematician, who works at Kyoto University’s Research Institute for Mathematical Sciences (RIMS) in Japan, did not even announce his work to peers around the world. He simply posted the papers, and waited for the world to find out.

Probably the first person to notice the papers was Akio Tamagawa, a colleague of Mochizuki’s at RIMS. He, like other researchers, knew that Mochizuki had been working on the conjecture for years and had been finalizing his work. That same day, Tamagawa e-mailed the news to one of his collaborators, number theorist Ivan Fesenko of the University of Nottingham, UK. Fesenko immediately downloaded the papers and started to read. But he soon became “bewildered”, he says. “It was impossible to understand them.”

Fesenko e-mailed some top experts in Mochizuki’s field of arithmetic geometry, and word of the proof quickly spread. Within days, intense chatter began on mathematical blogs and online forums (see Nature http://doi.org/725; 2012). But for many researchers, early elation about the proof quickly turned to scepticism. Everyone — even those whose area of expertise was closest to Mochizuki’s — was just as flummoxed by the papers as Fesenko had been. To complete the proof, Mochizuki had invented a new branch of his discipline, one that is astonishingly abstract even by the standards of pure maths. “Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space,” number theorist Jordan Ellenberg, of the University of Wisconsin–Madison, wrote on his blog a few days after the paper appeared.

Three years on, Mochizuki’s proof remains in mathematical limbo — neither debunked nor accepted by the wider community. Mochizuki has estimated that it would take a maths graduate student about 10 years to be able to understand his work, and Fesenko believes that it would take even an expert in arithmetic geometry some 500 hours. So far, only four mathematicians say that they have been able to read the entire proof.

Adding to the enigma is Mochizuki himself. He has so far lectured about his work only in Japan, in Japanese, and despite being fluent in English, he has declined invitations to talk about it elsewhere. He does not speak to journalists; several requests for an interview for this story went unanswered. Mochizuki has replied to e-mails from other mathematicians and been forthcoming to colleagues who have visited him, but his only public input has been sporadic posts on his website. In December 2014, he wrote that to understand his work, there was a “need for researchers to deactivate the thought patterns that they have installed in their brains and taken for granted for so many years”. To mathematician Lieven Le Bruyn of the University of Antwerp in Belgium, Mochizuki’s attitude sounds defiant. “Is it just me,” he wrote on his blog earlier this year, “or is Mochizuki really sticking up his middle finger to the mathematical community”.

Now, that community is attempting to sort the situation out. In December, the first workshop on the proof outside of Asia will take place in Oxford, UK. Mochizuki will not be there in person, but he is said to be willing to answer questions from the workshop through Skype. The organizers hope that the discussion will motivate more mathematicians to invest the time to familiarize themselves with his ideas — and potentially move the needle in Mochizuki’s favour.

In his latest verification report, Mochizuki wrote that the status of his theory with respect to arithmetic geometry “constitutes a sort of faithful miniature model of the status of pure mathematics in human society”. The trouble that he faces in communicating his abstract work to his own discipline mirrors the challenge that mathematicians as a whole often face in communicating their craft to the wider world.

Primal importance

The abc conjecture refers to numerical expressions of the type a + b = c. The statement, which comes in several slightly different versions, concerns the prime numbers that divide each of the quantities a, b and c. Every whole number, or integer, can be expressed in an essentially unique way as a product of prime numbers — those that cannot be factored into smaller whole numbers: for example, 15 = 3 × 5 or 84 = 2 × 2 × 3 × 7. In principle, the prime factors of a and b have no connection to those of their sum, c. But the abc conjecture links them together. It presumes, roughly, that if a lot of small primes divide a and b then only a few, large ones divide c.
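In its usual quantitative form (not spelled out in the excerpt above), the conjecture compares c with the radical rad(abc), the product of the distinct primes dividing a, b and c. A short Python sketch, added purely for illustration, makes these objects concrete:

```python
from math import gcd

def prime_factors(n):
    """Return the multiset of prime factors of n, e.g. 84 -> [2, 2, 3, 7]."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def radical(n):
    """Product of the distinct primes dividing n (the 'radical' of n)."""
    result = 1
    for p in set(prime_factors(n)):
        result *= p
    return result

# One common form of the conjecture looks at coprime triples a + b = c
# and compares c with rad(a*b*c).  For most triples rad(a*b*c) exceeds c;
# 'high-quality' exceptions such as 1 + 8 = 9 are rare.
a, b = 1, 8
c = a + b
assert gcd(a, b) == 1
print(radical(a * b * c))   # rad(1*8*9) = 2*3 = 6, smaller than c = 9
```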

This possibility was first mentioned in 1985, in a rather off-hand remark about a particular class of equations by French mathematician Joseph Oesterlé during a talk in Germany. Sitting in the audience was David Masser, a fellow number theorist now at the University of Basel in Switzerland, who recognized the potential importance of the conjecture, and later publicized it in a more general form. It is now credited to both, and is often known as the Oesterlé–Masser conjecture.

“Looking at it, you feel a bit like you might be reading a paper from the future.”

A few years later, Noam Elkies, a mathematician at Harvard University in Cambridge, Massachusetts, realized that the abc conjecture, if true, would have profound implications for the study of equations concerning whole numbers — also known as Diophantine equations after Diophantus, the ancient Greek mathematician who first studied them.

Elkies found that a proof of the abc conjecture would solve a huge collection of famous and unsolved Diophantine equations in one stroke. That is because it would put explicit bounds on the size of the solutions. For example, abc might show that all the solutions to an equation must be smaller than 100. To find those solutions, all one would have to do would be to plug in every number from 0 to 99 and calculate which ones work. Without abc, by contrast, there would be infinitely many numbers to plug in.
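The bound-then-search idea can be sketched in Python with a hypothetical equation (Pythagorean triples, chosen only for illustration — the article names no specific equation) and the article's example bound of 100:

```python
# Hypothetical illustration: once a bound B is known, solving a Diophantine
# equation reduces to a finite search.  Here x^2 + y^2 = z^2 stands in for
# any equation abc might bound; B = 100 mirrors the article's example.
B = 100
solutions = [(x, y, z)
             for x in range(1, B)
             for y in range(x, B)      # y >= x avoids mirror duplicates
             for z in range(y, B)
             if x * x + y * y == z * z]
print(solutions[:3])   # [(3, 4, 5), (5, 12, 13), (6, 8, 10)]
```

Without such a bound, the loop over candidate values would never terminate, which is exactly the difference the abc conjecture would make.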

http://www.nature.com/news/the-biggest-mystery-in-mathematics-shinichi-mochizuki-and-the-impenetrable-proof-1.18509

Posted by F. Sheikh

“Am I the only one around here?” By Carl Pierer

(Pigeonhole Principle)

This meme is taken from a scene in the Coen brothers’ 1998 comedy “The Big Lebowski”. During a game of bowling, Walter gets annoyed at the other characters constantly overstepping the line. Drawing a gun, he asks: “Am I the only one around here who gives a shit about the rules?”[ii]


Considering that there are roughly 7 billion people on earth, a positive answer seems highly unlikely. But it is possible to do better. We can know with certainty, i.e. prove, that the creator of the meme is not the only one. This is a simple and straightforward application of a fascinating, intuitive and yet powerful mathematical principle. It is usually called “pigeonhole principle” (for reasons to be explained below) or “Dirichlet’s principle”.

There exist many formulations of Dirichlet’s principle, but a very simple one is the following: suppose you have n holes (where n is a positive integer) and n+1 pigeons. Now, no matter how hard you try, it is impossible to fit the pigeons into individual holes: at least one hole must contain two (or more). The same reasoning applies to hairs. The numbers vary, of course, but an average blonde person is thought to have about 150,000 hairs on their head[v]. To be on the safe side, let us assume that the hairiest person on earth has 300,000 hairs. For ease of calculation, let us further assume that there are 7 billion people on earth. Then at least two people must have the same number of hairs. Indeed, since 7 billion people are spread over at most 300,001 possible hair counts, at least ⌈7,000,000,000 / 300,001⌉ = 23,334 people must share one.
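The generalised pigeonhole principle gives the exact count: with n pigeons and k holes, some hole must receive at least ⌈n/k⌉ pigeons. A quick check in Python (an illustrative addition, using the numbers assumed above):

```python
# Generalised pigeonhole principle: n pigeons in k holes force some hole
# to hold at least ceil(n / k) pigeons.  Hair counts 0..300,000 give
# k = 300,001 possible 'holes' for n = 7 billion people.
people = 7_000_000_000
hair_counts = 300_001
at_least = (people + hair_counts - 1) // hair_counts   # integer ceiling
print(at_least)   # 23334
```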

To demonstrate the truth of this rather obscure claim, suppose it is false: that is, no two people have the same number of hairs. Put the 7 billion people in a row, starting with a person with 0 hairs on our left and running up to the Guinness World Record hairiest person with 300,000 hairs. So person 1 has 0 hairs, person 2 has exactly one hair, and so on, up to person 300,001, who holds the record. Now what about person 300,002? She must have one of 0, 1, …, 300,000 hairs (otherwise the record would be broken yet again!). But all those hair counts are already taken by persons 1 to 300,001, so she necessarily has the same number of hairs as one of them.

Of course, this is a rather silly application, but the principle can be generalised (in semi-mathematical terms): if a set S contains more elements than a set T, there is no way of assigning a distinct element of T to each element of S.
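For small sets the generalised claim can even be verified exhaustively; a brief illustrative sketch:

```python
from itertools import product

# With |S| = 5 and |T| = 4, enumerate all 4^5 = 1024 assignments S -> T
# and check that none of them is injective (i.e. uses 5 distinct values).
S, T = range(5), range(4)
injective_exists = any(len(set(f)) == len(S)
                       for f in product(T, repeat=len(S)))
print(injective_exists)   # False: the pigeonhole principle in action
```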

So far, we have only considered finite sets, but what happens if either S or T (or both) are infinite? The pigeonhole principle still applies, and has implicitly been put to use in some of G. Cantor’s (1845-1918) most beautiful proofs, notably in his diagonal argument.

See more at: http://www.3quarksdaily.com/3quarksdaily/2014/10/page/2/

Posted by F. Sheikh

Solution For Complexities Of Big Data: Data Smashing

“Just as smashing atoms can reveal their composition, “colliding” quantitative data streams can reveal their hidden structure,” writes Chattopadhyay.

Most data comparison algorithms today have one major weakness – somewhere, they rely on a human expert to specify what aspects of the data are relevant for comparison, and what aspects aren’t. In the era of Big Data however, experts aren’t keeping pace with the growing amounts and complexities of data.

Now, Cornell computing researchers have come up with a new principle they call “data smashing” for estimating the similarities between streams of arbitrary data without human intervention, and without access to the data sources. Hod Lipson, associate professor of mechanical engineering and computing and information science, and Ishanu Chattopadhyay, a former postdoctoral associate with Lipson and now at the University of Chicago, have described their method in Royal Society Interface.

Data smashing is based on a new way of comparing data streams. The process involves two steps. First, the data streams are algorithmically “smashed” to “annihilate” each other’s information. Then the process measures how much information survives the collision. The more information remains, the less likely it is that the streams originated from the same source.


Any time a data mining algorithm searches beyond simple correlations, a human expert must help define a notion of similarity — by specifying important distinguishing “features” of the data to compare, or by training learning algorithms using copious amounts of examples. The data smashing principle removes the reliance on expert-defined features or examples and, in many cases, does so faster and with better accuracy than traditional methods, according to Chattopadhyay.
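The paper’s actual anti-stream construction is more subtle, but a rough feel for expert-free stream comparison can be had from the normalised compression distance, a classic information-theoretic measure — a sketch, emphatically not the authors’ algorithm:

```python
import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalised compression distance: smaller means more shared structure.
    A classic stand-in for expert-free comparison, NOT data smashing itself."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

random.seed(1)
periodic_a = b"ab" * 1000    # two streams with the same hidden
periodic_b = b"ba" * 1000    # periodic structure...
noise = bytes(random.randrange(256) for _ in range(2000))  # ...and one without

# Streams sharing structure sit far closer together than structure vs. noise,
# with no human telling the program which 'features' matter.
print(ncd(periodic_a, periodic_b) < ncd(periodic_a, noise))   # True
```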

Read more: http://www.33rdsquare.com/2014/10/data-smashing-used-to-find-hidden.html
Posted by F. Sheikh

Mathematics of Roughness

By Jim Holt

Benoit Mandelbrot, the brilliant Polish-French-American mathematician who died in 2010, had a poet’s taste for complexity and strangeness. His genius for noticing deep links among far-flung phenomena led him to create a new branch of geometry, one that has deepened our understanding of both natural forms and patterns of human behavior. The key to it is a simple yet elusive idea, that of self-similarity.

To see what self-similarity means, consider a homely example: the cauliflower. Take a head of this vegetable and observe its form—the way it is composed of florets. Pull off one of those florets. What does it look like? It looks like a little head of cauliflower, with its own subflorets. Now pull off one of those subflorets. What does that look like? A still tinier cauliflower. If you continue this process—and you may soon need a magnifying glass—you’ll find that the smaller and smaller pieces all resemble the head you started with. The cauliflower is thus said to be self-similar. Each of its parts echoes the whole.

Other self-similar phenomena, each with its distinctive form, include clouds, coastlines, bolts of lightning, clusters of galaxies, the network of blood vessels in our bodies, and, quite possibly, the pattern of ups and downs in financial markets. The closer you look at a coastline, the more you find it is jagged, not smooth, and each jagged segment contains smaller, similarly jagged segments that can be described by Mandelbrot’s methods. Because of the essential roughness of self-similar forms, classical mathematics is ill-equipped to deal with them. Its methods, from the Greeks on down to the last century, have been better suited to smooth forms, like circles. (Note that a circle is not self-similar: if you cut it up into smaller and smaller segments, those segments become nearly straight.)
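The coastline effect can be seen numerically on the Koch curve, the textbook self-similar shape (a quick sketch added for illustration, not from the article): each refinement replaces every segment with four segments a third as long, so the measured length grows without bound while the curve’s fractal dimension stays fixed at log 4 / log 3 ≈ 1.26.

```python
import math

# Koch curve: each refinement multiplies the measured length by 4/3,
# so 'coastline length' diverges as the measuring ruler shrinks.
length = 1.0
for step in range(6):
    print(f"step {step}: length = {length:.4f}")
    length *= 4 / 3

# Self-similarity dimension: N = 4 copies at scale r = 1/3.
print(math.log(4) / math.log(3))   # ~1.2619, strictly between 1 and 2
```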

Only in the last few decades has a mathematics of roughness emerged, one that can get a grip on self-similarity and kindred matters like turbulence, noise, clustering, and chaos. And Mandelbrot was the prime mover behind it. He had a peripatetic career, but he spent much of it as a researcher for IBM in upstate New York. In the late 1970s he became famous for popularizing the idea of self-similarity, and for coining the word “fractal” (from the Latin fractus, meaning broken) to designate self-similar forms. In 1980 he discovered the “Mandelbrot set,” whose shape—it looks a bit like a warty snowman or beetle—came to represent the newly fashionable science of chaos. What is perhaps less well known about Mandelbrot is the subversive work he did in economics. The financial models he created, based on his fractal ideas, implied that stock and currency markets were far riskier than the reigning consensus in business schools and investment banks supposed, and that wild gyrations—like the 777-point plunge in the Dow on September 29, 2008—were inevitable. Full article at the link below!

http://www.nybooks.com/articles/archives/2013/may/23/mandlebrot-mathematics-of-roughness/
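The Mandelbrot set mentioned above is defined by a strikingly short rule: iterate z → z² + c starting from z = 0 and keep the points c whose orbit stays bounded. A minimal escape-time sketch (an illustrative addition, not from the article):

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Escape-time test: keep c if z -> z*z + c stays bounded from z = 0."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| exceeds 2, escape is guaranteed
            return False
    return True

print(in_mandelbrot(0 + 0j))    # True: the orbit stays at 0
print(in_mandelbrot(1 + 0j))    # False: 0, 1, 2, 5, ... diverges

# A coarse ASCII glimpse of the famous warty-beetle shape:
for y in range(-10, 11):
    print("".join("#" if in_mandelbrot(complex(x / 20, y / 10)) else " "
                  for x in range(-40, 11)))
```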