Why do our brains see what we want them to see – even when that is divorced from reality?

Fascinating article by Daniel Yon in Aeon on how our brains see things based on our preformed expectations. f.sheikh

The Book of Days (1864) by the Scottish author Robert Chambers reports a curious legal case: in 1457 in the town of Lavegny, a sow and her piglets were charged and tried for the murder of a partially eaten small child. After much deliberation, the court condemned the sow to death for her part in the act, but acquitted the naive piglets who were too young to appreciate the gravity of their crimes.

Subjecting a pig to a criminal trial seems perverse through modern eyes, since many of us believe that humans possess an awareness of actions and outcomes that separates us from other animals. While a grazing pig might not know what it is chewing, human beings are surely abreast of their actions and alert to their unfolding consequences. However, while our identities and our societies are built on this assumption of insight, psychology and neuroscience are beginning to reveal how difficult it is for our brains to monitor even our simplest interactions with the physical and social world. In the face of these obstacles, our brains rely on predictive mechanisms that align our experience with our expectations. While such alignments are often useful, they can cause our experiences to depart from objective reality – reducing the clear-cut insight that supposedly separates us from the Lavegny pigs.

One challenge that our brains face in monitoring our actions is the inherently ambiguous information they receive. We experience the world outside our heads through the veil of our sensory systems: the peripheral organs and nervous tissues that pick up and process different physical signals, such as light that hits the eyes or pressure on the skin. Though these circuits are remarkably complex, the sensory wetware of our brain possesses the weaknesses common to many biological systems: the wiring is not perfect, transmission is leaky, and the system is plagued by noise – much like how the crackle of a poorly tuned radio masks the real transmission.

But noise is not the only obstacle. Even if these circuits transmitted with perfect fidelity, our perceptual experience would still be incomplete. This is because the veil of our sensory apparatus picks up only the ‘shadows’ of objects in the outside world. To illustrate this, think about how our visual system works. When we look out on the world around us, we sample spatial patterns of light that bounce off different objects and land on the flat surface of the eye. This two-dimensional map of the world is preserved throughout the earliest parts of the visual brain, and forms the basis of what we see. But while this process is impressive, it leaves observers with the challenge of reconstructing the real three-dimensional world from the two-dimensional shadow that has been cast on its sensory surface.

Thinking about our own experience, it seems like this challenge isn’t too hard to solve. Most of us see the world in 3D. For example, when you look at your own hand, a particular 2D sensory shadow is cast on your eyes, and your brain successfully constructs a 3D image of a hand-shaped block of skin, flesh and bone. However, reconstructing a 3D object from a 2D shadow is what engineers call an ‘ill-posed problem’ – basically impossible to solve from the sampled data alone. This is because infinitely many different objects all cast the same shadow as the real hand. How does your brain pick out the right interpretation from all the possible contenders?
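The ambiguity can be made concrete with a toy pinhole-camera model (a hypothetical sketch, not from the article): every point along the same line of sight casts an identical two-dimensional shadow, so the 2D image alone cannot tell them apart.

```python
# Toy pinhole-camera model: a 3D point (x, y, z) projects to the
# 2D image point (f*x/z, f*y/z), where f is the focal length.
# This shows why recovering 3D from 2D is "ill-posed": different
# 3D points can produce exactly the same 2D shadow.

def project(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)

near = (1.0, 2.0, 4.0)   # a point 4 units away
far = (2.0, 4.0, 8.0)    # a different point, twice as distant

# Both 3D points land on the same 2D image coordinates.
assert project(near) == project(far) == (0.25, 0.5)
```

Scaling a point's coordinates and its distance by the same factor leaves the projection unchanged, which is why infinitely many objects can cast the shadow of a hand.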

Perception is difficult because two different objects can cast the same ‘shadow’ on your sensory system. Your brain could solve this problem by relying on what it already knows about the size and shape of things like hands.

The second challenge we face in effectively monitoring our actions is the problem of pace. Our sensory systems have to depict a rapid and continuous flow of incoming information. Rapidly perceiving these dynamic changes is important even for the simplest of movements: we will likely end up wearing our morning coffee if we can’t precisely anticipate when the cup will reach our lips. But, once again, the imperfect biological machinery we use to detect and transmit sensory signals makes it very difficult for our brains to quickly generate an accurate picture of what we’re doing. And time is not cheap: while it takes only a fraction of a second for signals to get from the eye to the brain, and fractions more to use this information to guide an ongoing action, these fractions can be the difference between a dry shirt and a wet one.

Psychologists and neuroscientists have long wondered what strategies our brains might use to overcome the problems of ambiguity and pace. There is a growing appreciation that both challenges could be overcome using prediction. The key idea here is that observers do not simply rely on the current input coming into their sensory systems, but combine it with ‘top-down’ expectations about what the world contains.
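One standard way this combination is formalised (a sketch of the textbook Gaussian case, not code from the article) is precision-weighted fusion: the estimate lands between the prior expectation and the noisy sensory signal, pulled toward whichever source is more reliable.

```python
# Fuse a Gaussian prior ("expectation") with a Gaussian likelihood
# ("noisy sensory input"). Precision = 1/variance; the posterior mean
# is a precision-weighted average, so the more reliable source wins.

def fuse(prior_mean, prior_var, sense_mean, sense_var):
    w_prior = 1.0 / prior_var     # precision of the expectation
    w_sense = 1.0 / sense_var     # precision of the sensory signal
    post_var = 1.0 / (w_prior + w_sense)
    post_mean = post_var * (w_prior * prior_mean + w_sense * sense_mean)
    return post_mean, post_var

# Expectation says the cup is 30 cm away; vision says 34 cm but is noisy.
mean, var = fuse(prior_mean=30.0, prior_var=1.0, sense_mean=34.0, sense_var=3.0)
print(round(mean, 2), round(var, 2))  # 31.0 0.75
```

Because the prior here is three times more precise than the sensory signal, the fused estimate sits much closer to the expectation – exactly the kind of alignment that is usually helpful but can drag experience away from what the senses actually report.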

Machine Learning Confronts the Elephant in the Room – by Kevin Hartnett

Score one for the human brain. In a new study, computer scientists found that artificial intelligence systems fail a vision test a child could accomplish with ease.

“It’s a clever and important study that reminds us that ‘deep learning’ isn’t really that deep,” said Gary Marcus, a neuroscientist at New York University who was not affiliated with the work.

The result takes place in the field of computer vision, where artificial intelligence systems attempt to detect and categorize objects. They might try to find all the pedestrians in a street scene, or just distinguish a bird from a bicycle (which is a notoriously difficult task). The stakes are high: As computers take over critical tasks like automated surveillance and autonomous driving, we’ll want their visual processing to be at least as good as the human eyes they’re replacing.

It won’t be easy. The new work accentuates the sophistication of human vision — and the challenge of building systems that mimic it. In the study, the researchers presented a computer vision system with a living room scene. The system processed it well. It correctly identified a chair, a person, books on a shelf. Then the researchers introduced an anomalous object into the scene — an image of an elephant. The elephant’s mere presence caused the system to forget itself: Suddenly it started calling a chair a couch and the elephant a chair, while turning completely blind to other objects it had previously seen.

“There are all sorts of weird things happening that show how brittle current object detection systems are,” said Amir Rosenfeld, a researcher at York University in Toronto and co-author of the study along with his York colleague John Tsotsos and Richard Zemel of the University of Toronto.

Researchers are still trying to understand exactly why computer vision systems get tripped up so easily, but they have a good guess. It has to do with an ability humans have that AI lacks: the ability to understand when a scene is confusing and thus go back for a second glance.

The Elephant in the Room

Eyes wide open, we take in staggering amounts of visual information. The human brain processes it in stride. “We open our eyes and everything happens,” said Tsotsos.

Artificial intelligence, by contrast, creates visual impressions laboriously, as if it were reading a description in Braille. It runs its algorithmic fingertips over pixels, which it shapes into increasingly complex representations. The specific type of AI system that performs this process is called a neural network. It sends an image through a series of “layers.” At each layer, the details of the image — the colors and brightnesses of individual pixels — give way to increasingly abstracted descriptions of what the image portrays. At the end of the process, the neural network produces a best-guess prediction about what it’s looking at.

“It’s all moving from one layer to the next by taking the output of the previous layer, processing it and passing it along to the next layer, like a pipeline,” said Tsotsos.

Graphic: Deep neural networks learn by adjusting the strengths of their connections to better convey input signals through multiple layers to neurons associated with the right general concepts. When data is fed into a network, each artificial neuron that fires (labeled “1”) transmits signals to certain neurons in the next layer, which are likely to fire if multiple signals are received. The process filters out noise and retains only the most relevant features.
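The pipeline described above can be sketched in a few lines (a toy illustration with made-up weights, not the study's actual network): each layer weights the previous layer's firing pattern, and a neuron "fires" (outputs 1) when its summed input crosses a threshold.

```python
# Toy feedforward pass through threshold neurons, mimicking the
# layer-to-layer "pipeline": each layer's firing pattern of 0s and 1s
# is weighted and summed to decide which neurons fire next.

def layer_output(inputs, weights, threshold=1.0):
    # weights[j][i] connects input neuron i to output neuron j
    return [
        1 if sum(w * x for w, x in zip(row, inputs)) >= threshold else 0
        for row in weights
    ]

def forward(pixels, layers):
    activations = pixels
    for weights in layers:       # output of one layer feeds the next
        activations = layer_output(activations, weights)
    return activations

# Two tiny made-up layers: 3 inputs -> 2 hidden neurons -> 1 output.
layers = [
    [[0.6, 0.6, 0.0],            # hidden neuron 0
     [0.0, 0.7, 0.7]],           # hidden neuron 1
    [[0.5, 0.5]],                # output fires only if both hidden fire
]

print(forward([1, 1, 1], layers))  # [1]: both hidden neurons fire
print(forward([1, 0, 0], layers))  # [0]: neither hidden neuron fires
```

Note the strictly one-way flow: nothing in this pipeline lets a later layer tell an earlier one "that looks confusing, look again" – which is the second-glance ability the researchers suggest humans have and such networks lack.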
posted by f.sheikh

The ‘father of information theory’, Claude Shannon, brought us our digital world

The video below is worth watching: how a mathematician who loved playing blackjack took the simple 0-1, yes-no, on-off concept and brought us the digital age. (fayyaz sheikh)

If 100 years ago futurists were imagining things that were not so different from Skype-like global communications technologies and wonders such as a device that could encompass all the instruments of an orchestra, they did so on distinctly analogue lines. What no one foresaw, however, was that a single system would underpin nearly every innovation of the coming information revolution. Enter Claude Shannon, the Massachusetts Institute of Technology-educated mathematician who solved the communication problem that early 20th century thinkers didn’t even know we had.

Click here to access video

Chronology of Homo Species

Wequar Azeem Sahib gave an excellent talk on the origin of life and the evolution of human beings and the brain. Dr. Shoeb has shared the link below, which shows the chronology of Homo species.

Click Here