Limits Of Science & The Human Brain

(Another article on how far science can take us and the limits of our brain's ability to understand. How do we know we understand? The article is by Martin Rees, a professor of cosmology and astrophysics. f. sheikh)

But I think science will hit the buffers at some point. There are two reasons why this might happen. The optimistic one is that we clean up and codify certain areas (such as atomic physics) to the point that there’s no more to say. A second, more worrying possibility is that we’ll reach the limits of what our brains can grasp. There might be concepts, crucial to a full understanding of physical reality, that we aren’t aware of, any more than a monkey comprehends Darwinism or meteorology. Some insights might have to await a post-human intelligence.

Scientific knowledge is actually surprisingly ‘patchy’ – and the deepest mysteries often lie close by. Today, we can convincingly interpret measurements that reveal two black holes crashing together more than a billion light years from Earth. Meanwhile, we’ve made little progress in treating the common cold, despite great leaps forward in epidemiology. The fact that we can be confident of arcane and remote cosmic phenomena, and flummoxed by everyday things, isn’t really as paradoxical as it looks. Astronomy is far simpler than the biological and human sciences. Black holes, although they seem exotic to us, are among the uncomplicated entities in nature. They can be described exactly by simple equations.
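As a standard illustration (this example is mine, not from the excerpt): a non-rotating, uncharged black hole is characterised completely by its mass M, and the radius of its event horizon follows from a single line, the Schwarzschild formula r_s = 2GM/c², where G is the gravitational constant and c is the speed of light. Very few everyday objects admit so compact an exact description.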

Full Article

“Scientific Proof Is A Myth” By Ethan Siegel

(Some of us have argued on the pages of TFUSA as if evolution is beyond any doubt. The author of this article argues that the scientific 'proofs' in support of the theory of evolution and other theories are transitory, and that the very notion of scientific proof is a myth. fsheikh)

You’ve heard of our greatest scientific theories: the theory of evolution, the Big Bang theory, the theory of gravity. You’ve also heard of the concept of a proof, and the claims that certain pieces of evidence prove the validities of these theories. Fossils, genetic inheritance, and DNA prove the theory of evolution. The Hubble expansion of the Universe, the evolution of stars, galaxies, and heavy elements, and the existence of the cosmic microwave background prove the Big Bang theory. And falling objects, GPS clocks, planetary motion, and the deflection of starlight prove the theory of gravity.

Except that’s a complete lie. While they provide very strong evidence for those theories, they aren’t proof. In fact, when it comes to science, proving anything is an impossibility.

(Image credit: Karen Teramura, UH IfA, with James O’Donoghue and Luke Moore)

Reality is a complicated place. All we have to guide us, from an empirical point of view, are the quantities we can measure and observe. Even at that, those quantities are only as good as the tools and equipment we use to make those observations and measurements. Distances and sizes are only as good as the measuring sticks you have access to; brightness measurements are only as good as your ability to count and quantify photons; even time itself is only known as well as the clock you have to measure its passage. No matter how good our measurements and observations are, there’s a limit to how good they are.
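To make that limit concrete, here is a minimal sketch (the quantity, noise level, and sample sizes are invented for illustration; nothing here is from Siegel's article) of how repeated measurement narrows uncertainty without ever eliminating it:

```python
# Simulated repeated measurements of a quantity with Gaussian instrument
# noise. The standard error of the mean shrinks as 1/sqrt(N) but never
# reaches zero, so some uncertainty always remains.
import math
import random

TRUE_VALUE = 9.81    # hypothetical "real" quantity (e.g. local gravity, m/s^2)
NOISE_SIGMA = 0.05   # assumed instrument noise (standard deviation)

def measure() -> float:
    """One noisy measurement: the true value plus instrument error."""
    return random.gauss(TRUE_VALUE, NOISE_SIGMA)

for n in (10, 1_000, 100_000):
    samples = [measure() for _ in range(n)]
    mean = sum(samples) / n
    std_error = NOISE_SIGMA / math.sqrt(n)  # uncertainty of the mean
    print(f"N={n:>7}: estimate = {mean:.5f} +/- {std_error:.5f}")
```

However many measurements we take, the error bar only shrinks; it never vanishes, which is why evidence can accumulate indefinitely without ever amounting to proof.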

Full article

We think with our whole body, not just the brain; the body is the missing link in Artificial Intelligence.

An article worth reading by Ben Medlock, who built a communication system for the physicist Stephen Hawking. He argues that our thinking involves not just the brain but all the cells of our body, shaped by billions of years of evolution. AI is missing this crucial component: the thinking body. f sheikh

It is tempting to think of the mind as a layer that sits on top of more primitive cognitive structures. We experience ourselves as conscious beings, after all, in a way that feels different to the rhythm of our heartbeat or the rumblings of our stomach. If the operations of the brain can be separated out and stratified, then perhaps we can construct something akin to just the top layer, and achieve human-like artificial intelligence (AI) while bypassing the messy flesh that characterises organic life.

I understand the appeal of this view, because I co-founded SwiftKey, a predictive-language software company that was bought by Microsoft. Our goal is to emulate the remarkable processes by which human beings can understand and manipulate language. We’ve made some decent progress: I was pretty proud of the elegant new communication system we built for the physicist Stephen Hawking between 2012 and 2014. But despite encouraging results, most of the time I’m reminded that we’re nowhere near achieving human-like AI. Why? Because the layered model of cognition is wrong. Most AI researchers are currently missing a central piece of the puzzle: embodiment.

Things took a wrong turn at the beginning of modern AI, back in the 1950s. Computer scientists decided to try to imitate conscious reasoning by building logical systems based on symbols. The method involved associating real-world entities with digital codes to create virtual models of the environment, which could then be projected back onto the world itself. For instance, using symbolic logic, you could instruct a machine to ‘learn’ that a cat is an animal by encoding a specific piece of knowledge using a mathematical formula such as ‘cat > is > animal’. Such formulae can be rolled up into more complex statements that allow the system to manipulate and test propositions – such as whether your average cat is as big as a horse, or likely to chase a mouse.
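As a concrete illustration of that style, here is a minimal sketch in Python (the facts, relations, and function names are invented for this example; no historical system worked exactly this way):

```python
# Knowledge is stored as (subject, relation, object) triples, and a
# simple rule lets the system test propositions such as "is a cat an animal?".
FACTS = {
    ("cat", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("cat", "chases", "mouse"),
    ("cat", "size", "small"),
    ("horse", "size", "large"),
}

def is_a(x: str, y: str) -> bool:
    """True if x is a y, following 'is_a' links transitively."""
    if (x, "is_a", y) in FACTS:
        return True
    # follow intermediate categories, e.g. cat -> mammal -> animal
    return any(is_a(mid, y) for (s, r, mid) in FACTS if s == x and r == "is_a")

print(is_a("cat", "animal"))                # True: cat -> mammal -> animal
print(("cat", "chases", "mouse") in FACTS)  # True: directly encoded
print(("cat", "size", "large") in FACTS)    # False: the system "knows" cats are small
```

Everything such a system ‘knows’ has to be hand-encoded in advance, which is precisely why it breaks down when definitions turn ambiguous.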

This method found some early success in simple contrived environments: in ‘SHRDLU’, a virtual world created by the computer scientist Terry Winograd at MIT between 1968 and 1970, users could talk to the computer in order to move around simple block shapes such as cones and balls. But symbolic logic proved hopelessly inadequate when faced with real-world problems, where fine-tuned symbols broke down in the face of ambiguous definitions and myriad shades of interpretation.

In later decades, as computing power grew, researchers switched to using statistics to extract patterns from massive quantities of data. These methods are often referred to as ‘machine learning’. Rather than trying to encode high-level knowledge and logical reasoning, machine learning employs a bottom-up approach in which algorithms discern relationships by repeating tasks, such as classifying the visual objects in images or transcribing recorded speech into text. Such a system might learn to identify images of cats, for example, by looking at millions of cat photos, or to make a connection between cats and mice based on the way they are referred to throughout large bodies of text.
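A toy example makes the contrast with the symbolic approach clear. In this minimal sketch (the corpus and stopword list are invented for illustration), the association between cats and mice is never encoded by hand; it simply emerges from counting co-occurrences in text:

```python
# Bottom-up, statistical association: count how often word pairs appear
# together in sentences of a (toy) corpus, instead of hand-coding facts.
from collections import Counter
from itertools import combinations

CORPUS = [
    "the cat chased the mouse across the floor",
    "a mouse hid from the hungry cat",
    "the cat slept in the sun",
    "the horse grazed in the field",
]

STOPWORDS = {"the", "a", "in", "from", "across"}

cooccur = Counter()
for sentence in CORPUS:
    words = {w for w in sentence.split() if w not in STOPWORDS}
    for pair in combinations(sorted(words), 2):
        cooccur[pair] += 1

# "cat" and "mouse" end up strongly associated; "cat" and "horse" do not.
print(cooccur[("cat", "mouse")])   # 2
print(cooccur[("cat", "horse")])   # 0
```

Scale the corpus up from four sentences to the web and this counting, suitably refined, is how a statistical system comes to connect cats with mice without anyone ever writing the rule down.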

click for full article