HUMAN CONSCIOUSNESS AND ARTIFICIAL INTELLIGENCE


The light of the sun and moon cannot be outdistanced, yet mind reaches beyond them.

Galaxies are as infinite as grains of sand, yet mind spreads outside them.

(Myoan Eisai – a Japanese Zen Buddhist monk)

 

 

INTRODUCTION

Whereas human consciousness, intrinsically arising from the naturally evolved human brain, is still a mystery, artificial intelligence, algorithmically developed and uploaded into the silicon brain of a machine, is a feat of the human brain. Human beings have for ages followed, and still follow, the direction of human consciousness, but the thinking machine follows in a way that mirrors billions of years of the evolving brain and its consciousness. From ancient times to this day the phenomenon of human consciousness has intrigued philosophers, who for many centuries discussed it mostly in subjective terms. But for Steven Arthur Pinker (born 1954), a Canadian-American cognitive psychologist, linguist, and popular science author, Johnstone Family Professor in the Department of Psychology at Harvard University, known for his advocacy of evolutionary psychology and the computational theory of mind, "The brain, like it or not, is a machine. Scientists have come to the conclusion not because they are mechanistic killjoys, but because they have amassed evidence that every aspect of consciousness can be tied to the brain. . . . Consciousness presents us with puzzle after puzzle. How can a neural event cause consciousness to happen?" (Pinker, How the Mind Works, p. 132). Thus the mystery of human consciousness, from the time of the Cartesian cogito, "I think; therefore, I am," became an open challenge for neuroscientists. The idea of the brain as a "thinking machine" opened a window in the human mind to create the brain's digital double, capable of transmitting artificial intelligence. Toby Walsh, in his book Machines that Think, remarks: "Not without irony, Stephen Hawking (1942-2018), [an English theoretical physicist and cosmologist] welcomed a software update for his speech synthesizer with a warning that came in the electronic voice of that technology: 'The development of full artificial intelligence could spell the end of the human race'" (p. 8).

Although the "cognitive revolution" has introduced pragmatic methods of studying thought and other inner experiences of our mind, neuroscience, even helped by modern technology, has not yet provided an easy way of answering the hard question of how the subjective experience of human consciousness arises from the objective activity of the human brain. How can our brain's physical network of neurons, with all its chemical action, its electromagnetic system, and the interaction of billions of cells and circuits, create a mind that allows a unified awareness of our thinking, recognizing, remembering, feeling, predicting, and cognizing, of the innumerable experiences of our life and of the universe, repeated hundreds of millions of times in the neocortex, finally and apparently giving birth to an instantly combined output of all inner experiences in the form of "consciousness"? While we have not yet been able to give a definitive or comprehensive delineation of human consciousness, a scientifically created and defined artificial intelligence is already around us: on screens, in our houses, and even in our pockets. One day it will be talking and walking with us as a family member.

We know that at the root of artificial intelligence's technological appeal is the machine's capability to perform many tasks characteristic of human intelligence. According to Ray Kurzweil, a pioneering researcher in artificial intelligence, hybrids of biological and silicon-based intelligence will become possible, and one day the contents of a human brain will be transferable into a metallic brain, much as a CD-ROM uploads its software into a computer. Many thinkers, philosophers, and scientists have agreed that human consciousness is a unique human capability that arises when information is broadcast throughout the brain. But no central location in the brain has yet been identified as the seat of consciousness where, like streaming data in the head, it could be mapped, copied, and downloaded into a silicon brain.

The pivotal question before us is still about the very nature of human consciousness. Is consciousness an input loaded into the brain through our sensory experiences, perception, memory, intelligence, and the diverse media of subjectivity and objectivity that our cognitive process makes use of? Or is consciousness an extra entity that we humans have in addition to our abilities of perceiving, thinking, and feeling? Or is it an intrinsic and inseparable part of the human being as a creature that can perceive, think, and feel? If it is an extra ingredient, as many of us think of our soul as an extra entity, then we are naturally inclined to ask, "Is it the distinctive telltale signature of a human being?" On the other hand, if we have evolved with it, then we want to know how and why only human consciousness has evolved. Further, there is also an opinion that we all have three eyes, the third one inside the head: the "pineal gland" in the human brain, which has the structure of an eye. It has cells that act as light receptors, as the retina does, and a structure comparable to the vitreous, the gel-like substance between the retina and the lens of the eye, with a shape similar to that of a lens. Scientists are researching to better understand the "pineal body," considered in Eastern spiritualism and Western philosophy a possible seat of consciousness. Once scientists are able to develop an artificial pineal gland, artificial intelligence will then also be able to have an algorithmically working artificial consciousness.

  • HUMAN CONDITION AND INTELLIGENT MACHINES

 

Before we argue about the role of intelligent machines and their capability for a consciousness that is the same as or similar to that of humans, we need a deeper understanding of the nature of human consciousness. The American Psychological Association's Dictionary of Psychology gives a twofold definition of consciousness:

The first describes consciousness as "the phenomena that humans report experiencing including mental contents that range from sensory to somatic perception to mental images, reportable ideas, inner speech, intentions to act, recalled memories, semantics, dreams, hallucinations, emotional feelings, 'fringe' feelings (e.g., a sense of knowledge), and aspects of cognitive and motor control." The second part of the definition speaks of "any of various subjective states of awareness in which conscious contents can be reported—for example—altered states such as sleeping, as well as the global access function of consciousness, presenting an endless variety of the focal contents to executive control and decision making" (1931).

The history of man's evolution reveals that at a certain point, when man transcended nature and ended his passive role as a mere creature, he emancipated himself from the complete bindings of nature: first by an erect posture and second by the growth of his brain. The evolution of man may have taken billions of years; what matters is that a patently new species, to be identified as the human being, arose transcending nature, life become "aware of itself." Self-awareness, reason, and imagination disrupted the harmony with nature that characterized man's prehuman existence. Upon becoming aware of himself, the human being also realized the limitations of his existence and his powerlessness as a finite being; in his death he visualized his own end. To this day he is never free from this dichotomy of his existence. He cannot rid himself of his mind even if he wants to; he cannot rid himself of his body as long as he is alive; rather, his mind and body create in him a strong urge to be alive and to live an infinite life. He cannot go back to the prehuman state of harmony with nature because he now views himself as a "special species." He must proceed to develop his reason until he becomes the sovereign of nature and a master of himself. But an awareness of his biological kinship with the rest of the animals poses a challenge to his conscious self. To assure himself that he is no longer like an animal, he is tempted to demonstrate his merits as a special species through his unique physical advantages and exceptional intellectual eminence.

The human mind, an evolutionary product of the biological brain, is now changing the course of evolution by creating a digital double in its own image, equipped with artificial intelligence and emotions. Homo sapiens, from the time of their appearance on this planet, have used their neural mechanism to build tools, which helped them initiate a new form of evolution that brought about a social culture of sharing knowledge. As neurology gave birth to technology, the process of technology has today led us invariably to the creation of an amazing tool we call the computer. The computer has enabled us to expand our knowledge base, permitting extensive multiple layers of links from one area of knowledge to another. Perceiving our distinctive appearance among animals, the uniqueness of our intelligence, our power of communication, and our capability of acquiring and sharing knowledge on this planet has given rise to the realization that humans are special creatures. Yet throughout our history of knowledge, scientists have mostly remained reluctant to evaluate and prove with scientific reasoning our claim of being a special creature, fearing that they might be seen as supporting the religious doctrine of human exceptionalism and intelligent design. However, regardless of how humans got to be the way they are today, their intelligence, with technology in their hands, has enabled them to overcome any biological hurdle to changing themselves in almost every aspect of their life. Hard scientific data accumulated across vast spheres, ranging from ecology to epistemology, cognitive psychology, and the study of consciousness, affirms that human beings are truly remarkable and are the only species we know of that is achieving this. Today, by developing artificial intelligence, human beings are successfully changing the course of evolution by creating digital doubles in their own image. . . . To read the full article please visit: https://independent.academia.edu/MirzaAshraf

Mirza I Ashraf – Academia.edu


What’s the difference between A.I., machine learning, and robotics?

Artificial intelligence is everywhere: on your screens, in your pockets, and one day it may even be walking into a home near you. The headlines tend to group this vast and diverse field into one subject. Robots are emerging from the labs, algorithms are playing ancient games and winning, and AI and its promises are becoming a part of our everyday lives. While all of these instances have some relationship to AI, it is not a monolithic field but one with many separate and distinct disciplines.

A lot of the time we use the term artificial intelligence as an all-encompassing umbrella that covers everything. That's not exactly the case. A.I., machine learning, deep learning, and robotics are all fascinating and separate topics. Each serves as an integral piece of the greater future of our tech, and many of these categories overlap and complement one another.

The broader AI field of study is an extensive place with a lot to study and choose from. Understanding the difference between these four areas is foundational to getting a grasp on, and seeing the whole picture of, the field.

Artificial intelligence

At the root of AI technology is the ability of machines to perform tasks characteristic of human intelligence. These include planning, pattern recognition, understanding natural language, learning, and solving problems.

There are two main types of AI: general and narrow. Our current technological capabilities fall under the latter. Narrow AI exhibits a sliver of some kind of intelligence, be it reminiscent of an animal or a human. This machine's expertise, as the name suggests, is narrow in scope. Usually, this type of AI will only be able to do one thing extremely well, like recognize images or search through databases at lightning speed.

General intelligence would be able to perform any task as well as or better than humans can. This is the goal of many AI researchers, but it is a long way down the road.

Current AI technology is responsible for a lot of amazing things. These algorithms help Amazon give you personalized recommendations and make sure your Google searches are relevant to what you're looking for. Almost any technologically literate person uses this type of tech every day.

One of the main differentiators between AI and conventional programming is that non-AI programs carry out a set of defined instructions, while AI learns without being explicitly programmed.
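That contrast can be made concrete with a toy sketch. Everything below is illustrative, not drawn from the article: a hand-written rule is compared with the same rule inferred from example data by ordinary least squares, one of the simplest forms of "learning from examples."

```python
# Conventional programming: the rule is written by hand by a programmer.
def to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: the same rule is inferred from example pairs.
# Fit y = a*x + b by ordinary least squares over (celsius, fahrenheit) data.
examples = [(0, 32), (10, 50), (25, 77), (100, 212)]
n = len(examples)
sx = sum(x for x, _ in examples)
sy = sum(y for _, y in examples)
sxx = sum(x * x for x, _ in examples)
sxy = sum(x * y for x, y in examples)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # learned slope
b = (sy - a * sx) / n                           # learned intercept
print(round(a, 2), round(b, 2))  # recovers the 9/5 and 32 of the hand-written rule
```

No one told the second program the conversion formula; it recovered the slope and intercept from the examples alone, which is the sense in which the paragraph says AI "learns without being explicitly programmed."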

Here is where the confusion starts. Often, but not always, AI utilizes machine learning, which is a subset of the AI field. If we go a little deeper, we get deep learning, one way of implementing machine learning by means of many-layered neural networks.

Furthermore, when we think about robotics we tend to treat robots and AI as interchangeable terms. In fact, AI algorithms are usually only one part of a larger technological matrix of hardware, electronics, and non-AI code inside a robot.

Robot… or artificially intelligent robot?

Robotics is a branch of technology that concerns itself strictly with robots. A robot is a programmable machine that carries out a set of tasks autonomously in some way. Robots are not computers, nor are they necessarily artificially intelligent.

Many experts cannot agree on what exactly constitutes a robot. But for our purposes, we’ll consider that it has a physical presence, is programmable and has some level of autonomy. Here are a few different examples of some robots we have today:

  • Roomba (Vacuum Cleaning Robot)

  • Automobile Assembly Line Arm

  • Surgery Robots

  • Atlas (Humanoid Robot)

Some of these robots, for example the assembly-line arm or the surgery bot, are explicitly programmed to do a job. They do not learn, so we could not consider them artificially intelligent.

Others are robots controlled by built-in AI programs. This is a recent development, as most industrial robots were only programmed to carry out repetitive tasks without thinking. Self-learning bots with machine-learning logic inside them would be considered AI; they need it in order to perform increasingly complex tasks.

“I’m sorry, Dave…” — HAL 9000 from Stanley Kubrick’s 2001: A Space Odyssey

What’s the difference between Artificial Intelligence and Machine Learning?

At its foundation, machine learning is a subset of AI and a way of working toward true AI. The term was coined by Arthur Samuel in 1959, who described it as giving computers "the ability to learn without being explicitly programmed."

The idea is to get the algorithm to learn, or be trained, to do something without being specifically hardcoded with a set of particular directions. It is machine learning that paves the way for artificial intelligence.

Arthur Samuel wanted to create a program that could enable his computer to beat him at checkers. Rather than write a detailed and long-winded program to do it, he had a different idea: the algorithm he created gave his computer the ability to learn as it played thousands of games against itself. This has been the crux of the idea ever since. By the early 1960s, the program was able to beat accomplished human players.

Over the years, machine learning has developed into a number of different methods:

  1. Supervised

  2. Semi-supervised

  3. Unsupervised

  4. Reinforcement

In a supervised setting, a program is given labeled data, say pictures of different animals tagged with their species, and learns to assign those labels itself, guessing and correcting its guesses as it trains. Semi-supervised learning labels only a few of the images; after that, the program must use its algorithm to classify the unlabeled images by drawing on what it learned from the labeled ones.
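The supervised setting can be sketched with a minimal nearest-centroid classifier. The animals, labels, and (weight, height) features below are all invented for the illustration; real systems use far richer features and models, but the train-on-labels, predict-on-new-data shape is the same.

```python
# Supervised learning sketch: a nearest-centroid classifier.
# Each training example is (features, label); features are (weight_kg, height_cm).
training = [
    ((4.0, 25.0), "cat"), ((5.0, 23.0), "cat"), ((3.5, 24.0), "cat"),
    ((30.0, 60.0), "dog"), ((25.0, 55.0), "dog"), ((35.0, 65.0), "dog"),
]

# "Training" here means computing the mean feature vector (centroid) per label.
centroids = {}
for features, label in training:
    sums, count = centroids.get(label, ((0.0, 0.0), 0))
    centroids[label] = ((sums[0] + features[0], sums[1] + features[1]), count + 1)
centroids = {lbl: (s[0] / c, s[1] / c) for lbl, (s, c) in centroids.items()}

def predict(features):
    # Assign the label whose centroid is closest (squared Euclidean distance).
    def dist(center):
        return (features[0] - center[0]) ** 2 + (features[1] - center[1]) ** 2
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

print(predict((4.5, 26.0)))   # small and short -> classified "cat"
print(predict((28.0, 58.0)))  # large and tall  -> classified "dog"
```

The labels do the teaching: the program never contains an if-statement saying what a cat is; it generalizes from the labeled examples.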

Unsupervised machine learning involves no preliminary labeled data at all. Thrown into the data, the algorithm has to sort the different classes of animals for itself. It can do this by grouping similar objects together based on how they look, creating rules from the similarities it finds along the way.
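The grouping-without-labels idea can be sketched with a tiny k-means loop. The data points, the assumption of two clusters, and the initialization are all illustrative choices, not anything specified in the article.

```python
# Unsupervised learning sketch: k-means clustering of unlabeled 1-D points.
# No labels are given; the algorithm discovers the two groups on its own.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = [points[0], points[3]]  # naive initialization with two seed points

for _ in range(10):  # alternate assignment and center-update steps
    clusters = {0: [], 1: []}
    for p in points:
        nearest = min((0, 1), key=lambda i: abs(p - centers[i]))
        clusters[nearest].append(p)
    centers = [sum(c) / len(c) for c in clusters.values()]

print([round(c, 2) for c in sorted(centers)])  # the two discovered group centers
```

The algorithm is never told that there are "small" and "large" points; it infers the two groups purely from the similarity (here, numerical closeness) of the data.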

Reinforcement learning is a little different from these other subsets of machine learning. A great example is the game of chess: the program knows a set of rules and bases its progress on the end result of either winning or losing, reinforcing the choices that led to a win.
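A minimal sketch of this win/lose feedback loop, assuming a toy "corridor" game rather than chess: the agent is rewarded only on reaching the goal, yet a tabular Q-learning loop (a standard textbook technique, named here by me, not by the article) learns to always move toward it.

```python
# Reinforcement learning sketch: tabular Q-learning on a tiny corridor game.
# States 0..4; reaching state 4 yields reward +1 and ends the episode.
import random

random.seed(0)
n_states, actions = 5, (-1, +1)           # actions: move left or move right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for _ in range(200):                       # play 200 episodes
    s = 0
    while s != n_states - 1:
        # Explore occasionally; otherwise take the best-known action.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        best_next = max(q[(s2, b)] for b in actions)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

policy = [max(actions, key=lambda act: q[(s, act)]) for s in range(n_states - 1)]
print(policy)  # the learned action in each non-goal state
```

No move was ever labeled "good" or "bad"; only the end-of-episode reward was given, and the discounted updates propagate that outcome back to the choices that led to it.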

A.I. Artificial Intelligence (2001), directed by Steven Spielberg

Deep learning

An even deeper subset of machine learning is deep learning. It is tasked with far greater problems than rudimentary sorting: it works in the realm of vast amounts of data and comes to its conclusions starting with no previous knowledge.

If it were to differentiate between two animals, it would distinguish them in a different way than regular machine learning does. First, all pictures of the animals would be scanned, pixel by pixel. Then it would parse the edges and shapes it finds, ranking and combining them to determine the difference.
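The pixel-by-pixel scanning and edge detection described above can be illustrated with a single convolution filter, the basic operation inside deep networks. The 5x5 "image" and the 3x3 vertical-edge kernel below are made up for the example; a real network would learn many such filters rather than have them written by hand.

```python
# Sketch of the first step a convolutional network performs: sliding a small
# filter over the pixels. This 3x3 kernel responds to vertical edges
# (dark-to-light transitions from left to right).
image = [  # 5x5 grayscale image: dark left columns, bright right columns
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

def convolve(img, ker):
    # Valid (no-padding) 2-D convolution of a 3x3 kernel over the image.
    out = []
    for i in range(len(img) - 2):
        row = []
        for j in range(len(img[0]) - 2):
            row.append(sum(ker[a][b] * img[i + a][j + b]
                           for a in range(3) for b in range(3)))
        out.append(row)
    return out

response = convolve(image, kernel)
print(response[0])  # large values where the dark/light boundary lies, 0 elsewhere
```

Deep networks stack many layers of such filters, so that early layers respond to edges, later layers to shapes built from those edges, and so on up to whole objects.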

Deep learning tends to require much more hardware power, and the machines that run it are usually housed in large data centers. Programs that use deep learning are essentially starting from scratch.

Of all the AI disciplines, deep learning is the most promising for one day creating a generalized artificial intelligence. Among the current applications deep learning has spawned are the many chatbots we see today: Alexa, Siri, and Microsoft's Cortana owe their brains to this nifty tech.

A new cohesive approach

There have been many seismic shifts in the tech world this past century, from the computing age to the internet to the world of mobile devices. These different categories of tech will pave the way for a new future. Or, as Google CEO Sundar Pichai put it quite nicely:

“Over time, the computer itself—whatever its form factor—will be an intelligent assistant helping you through your day. We will move from mobile first to an A.I. first world.”

Artificial intelligence, in all of its many forms combined, will take us on our next technological leap forward. Full Article

posted by f.sheikh

Limits Of Science & Human Brain

(Another article, on how far science can take us and the limits of our brain's ability to understand. How do we know we understand? The article is by Martin Rees, a professor of Cosmology and Astrophysics. f. sheikh)

But I think science will hit the buffers at some point. There are two reasons why this might happen. The optimistic one is that we clean up and codify certain areas (such as atomic physics) to the point that there’s no more to say. A second, more worrying possibility is that we’ll reach the limits of what our brains can grasp. There might be concepts, crucial to a full understanding of physical reality, that we aren’t aware of, any more than a monkey comprehends Darwinism or meteorology. Some insights might have to await a post-human intelligence.

Scientific knowledge is actually surprisingly ‘patchy’ – and the deepest mysteries often lie close by. Today, we can convincingly interpret measurements that reveal two black holes crashing together more than a billion light years from Earth. Meanwhile, we’ve made little progress in treating the common cold, despite great leaps forward in epidemiology. The fact that we can be confident of arcane and remote cosmic phenomena, and flummoxed by everyday things, isn’t really as paradoxical as it looks. Astronomy is far simpler than the biological and human sciences. Black holes, although they seem exotic to us, are among the uncomplicated entities in nature. They can be described exactly by simple equations.

Full Article

“Scientific Proof Is A Myth” By Ethan Siegel

(Some of us have argued on the pages of TFUSA as if evolution is beyond any doubt. The author of this article argues that the scientific proofs in support of the theory of evolution and other theories are transitory, and that it is a matter of time before these proofs turn out to be just a myth. fsheikh)

You’ve heard of our greatest scientific theories: the theory of evolution, the Big Bang theory, the theory of gravity. You’ve also heard of the concept of a proof, and the claims that certain pieces of evidence prove the validities of these theories. Fossils, genetic inheritance, and DNA prove the theory of evolution. The Hubble expansion of the Universe, the evolution of stars, galaxies, and heavy elements, and the existence of the cosmic microwave background prove the Big Bang theory. And falling objects, GPS clocks, planetary motion, and the deflection of starlight prove the theory of gravity.

Except that’s a complete lie. While they provide very strong evidence for those theories, they aren’t proof. In fact, when it comes to science, proving anything is an impossibility.

Art by Karen Teramura, UH IfA with James O’Donoghue and Luke Moore

Reality is a complicated place. All we have to guide us, from an empirical point of view, are the quantities we can measure and observe. Even at that, those quantities are only as good as the tools and equipment we use to make those observations and measurements. Distances and sizes are only as good as the measuring sticks you have access to; brightness measurements are only as good as your ability to count and quantify photons; even time itself is only known as well as the clock you have to measure its passage. No matter how good our measurements and observations are, there’s a limit to how good they are.

Full article