The death of artificial intelligence

Saint Ivuga



As a graduate student of computer engineering, I recall impassioned late-night debates on whether machines could ever be intelligent, in the sense of mimicking the cognition, common sense, and problem-solving skills of ordinary humans. Neural network research was hot, and one of my professors was a star in the field. Scientists and bearded philosophers spoke of ‘humanoid robots.’ A breakthrough seemed inevitable and imminent. Still, I felt certain that Artificial Intelligence (AI) was a doomed enterprise.

I argued out of intuition, from a sense of the immersive nature of our life in the world: how much we subconsciously acquire and summon to get through life, how we arrive at meaning and significance not in isolation but through embodied living, and how contextual, fluid, and intertwined this was with our moods, desires, experiences, selective memory, physical body, and so on. How can we program all this into a machine and have it pass the unrestricted Turing Test? How could a machine that did not care about its existence as humans do ever behave as humans do? In hindsight, it seems fitting that I was then also drawn to Dostoevsky, Camus, and Kierkegaard.



My interlocutors countered that while extremely complex, the human brain is clearly an instance of matter, amenable to the laws of physics. Our intelligence, and everything else that informed our being in the world, had to be somehow ‘coded’ in our brain’s circuitry, including the great many symbols, rules, and associations we relied on to get through a typical day. Was there any reason why we couldn’t ‘decode’ and reproduce it in a machine some day? Couldn’t a future supercomputer mimic our entire neural circuitry and be as smart as us? They posited a reductionist and computational approach to the brain that many, including Steven Pinker and Daniel Dennett, continue to champion today. Just three months ago, Dennett declared in his sonorous voice, “We are robots made of robots made of robots made of robots.”

But despite the big advances in computing (today’s supercomputers are, for example, ten million times faster than those of the early 90s), AI has fallen woefully short of its ambition and hype. Instead, we have “expert systems” that process predetermined inputs in specific domains, perform pattern matching and database lookups, and learn to adapt their outputs algorithmically. Examples include chess software, search engines, speech recognition, industrial and service robots, and traffic and weather forecasting systems. Machines have done well with tasks that we ourselves pursue, or can pursue, algorithmically, as in searching for the word “ersatz” in an essay, making cappuccino, or restacking books on a library shelf. But so much else that defines our intelligence remains well beyond machines, such as projecting our creativity and imagination to understand new contexts and their significance, or figuring out how and why new sensory stimuli are relevant or not. Why is AI in such a braindead state? Is there any hope for it? Let’s take a closer look.
***


Descartes, who held that science and math would one day explain everything in nature, understood the world as a set of meaningless facts to which the mind assigned values (or functions, according to John Searle). Early AI researchers accepted Descartes’ mental representations, embraced Hobbes’ view that reasoning was calculating, Leibniz’s idea that all knowledge could be expressed as a set of primitives, and Kant’s belief that all concepts were rules.[1] At the heart of Western rationalist metaphysics—which shares a remarkable continuity with ancient Greek and Christian metaphysics—lay the Cartesian mind-body dualism that became the dominant inspiration for early AI research.


Early researchers pursued what is now known as ‘symbolic AI.’ They assumed that our brain stores discrete thoughts, ideas, and memories at discrete points, and that information is “found” rather than “evoked” by humans. In other words, the brain was a repository of symbols and rules that mapped the external world into neural pulses. And so the problem boiled down to creating a gigantic knowledge base with efficient indexing, i.e., a search engine extraordinaire. They thought that a machine could be made as smart as a human by storing context-free facts and meta-rules able to reduce the search space effectively. Marvin Minsky of the MIT AI Lab went so far as to claim that our common sense could be reproduced in machines by representing ten million facts about objects and their functions.
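To make the picture concrete, the following is a minimal sketch, in Python, of the kind of fact-and-rule knowledge base the symbolic AI program envisioned. Everything in it (the facts, the rule format, the function names) is invented for illustration; real GOFAI systems were vastly more elaborate, but the underlying assumption was the same: intelligence as storage plus search.

[code]
# A toy sketch of the 'symbolic AI' picture described above: context-free
# facts plus rules, queried by search. Purely illustrative; the facts, rule
# format, and function names are invented for this example.

FACTS = {
    ("hammer", "is_a", "tool"),
    ("hammer", "used_for", "driving_nails"),
    ("nail", "is_a", "fastener"),
}

# Each rule maps a matched fact to a new derived fact.
RULES = [
    # If X is a tool, conclude that X is found in the workshop.
    (lambda f: f[1] == "is_a" and f[2] == "tool",
     lambda f: (f[0], "found_in", "workshop")),
]

def derive_all(facts, rules):
    """Apply rules repeatedly until no new facts appear (forward chaining)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for fact in list(known):
            for matches, conclude in rules:
                if matches(fact):
                    new_fact = conclude(fact)
                    if new_fact not in known:
                        known.add(new_fact)
                        changed = True
    return known

def query(facts, subject, relation):
    """Look up every stored fact matching (subject, relation, ?)."""
    return [obj for s, r, obj in facts if s == subject and r == relation]

if __name__ == "__main__":
    kb = derive_all(FACTS, RULES)
    print(query(kb, "hammer", "found_in"))   # ['workshop']
[/code]

Note that nothing in such a store tells the system which of its facts matter in a given situation, which is exactly the problem the next paragraphs turn to.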

It is one thing to feed millions of facts and rules into a computer; it is another to get it to recognize their significance and relevance. The ‘frame problem,’ as this is called, eventually became insurmountable for the ‘symbolic AI’ research paradigm:

If the computer is running a representation of the current state of the world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which might have to be updated? [1]
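To see why this is so corrosive, here is a toy Python illustration of the update problem the quotation describes. The world model, the event, and the hand-written update rules are all hypothetical; the point is only that the program has no principled way to decide which stored facts an event leaves untouched.

[code]
# A toy illustration of the frame problem quoted above. After any change in
# the world, which of the represented facts still hold? A programmer must
# anticipate the answer, event by event, fact by fact.

world_model = {
    "cup_on_table": True,
    "table_in_kitchen": True,
    "lights_on": True,
    "coffee_in_cup": True,
}

def handle_event(model, event):
    """Update the model after an event, using hand-written rules."""
    if event == "cat_jumps_on_table":
        model["cup_on_table"] = False    # the cup may have been knocked off
        model["coffee_in_cup"] = False   # ...and spilled
        # Does 'lights_on' change? 'table_in_kitchen'? The system itself
        # cannot tell relevance from irrelevance; we must decide for it,
        # for every event we can think of, and we will miss some.
    return model

print(handle_event(dict(world_model), "cat_jumps_on_table"))
[/code]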
GOFAI — Good Old Fashioned Artificial Intelligence — as symbolic AI came to be called, soon turned into a degenerative research program. It is unsettling to think how many prominent scientists and philosophers held, and continue to hold, such naïve assumptions about how humans operate in the world. A few tried to understand what went wrong and looked for a new paradigm for AI. No longer could they ignore the withering critiques of their work by Professor Hubert Dreyfus, who drew inspiration from the radical ideas of the German philosopher Martin Heidegger (1889-1976). It began dawning on them that humans were far more complex, with their subconscious familiarity and skillful coping with the world, nonlinear decision-making, ability to assess and adapt to new situations, and the role of things like purpose, intention, and creativity that shaped, and were in turn shaped by, their meaningful organization of the world.

***



In many ways, Heidegger stood opposed to the entire edifice of Western philosophy. A hammer, he pointed out, cannot be represented by just its physical features and function, detached from its relationship to nails and the anvil, the physical experience and skill of hammering, its role in building fine furniture and comfortable houses, etc. Merely associating facts, values or function with objects cannot capture the human idea of a hammer, with its role in the meaningful organization of the world as we experience it.


Or consider music speakers. One way to represent them, in the manner of rationalists, is as objects with physical properties (shape, dimensions, color, material, attached wires, etc.), to which is then assigned a value, use, or function. But this is not how we actually experience them. We experience them as speakers, inseparable from the act of listening to music, the ambience they add to our living room, their impact on our mood, and so on. We do not understand them as context-free, object-value pairs; we understand them through our context-laden use of them. When someone asks us to describe our speakers, we have to pause and think about their physical attributes. According to Heidegger, writes Professor William Blattner:

The philosophical tradition has misunderstood human experience by imposing a subject-object schema upon it. The individual human being has traditionally been understood as a rational animal, that is, an animal with cognitive powers, in particular the power to represent the world around it … the notion that human beings are persons and that persons are centers of subjective experience has been broadly accepted … Where the tradition has gone wrong is that it has interpreted subjectivity in a specific way, by means of concepts of ‘inner’ and ‘outer,’ ‘representation’ and ‘object’ … [which] dominates modern philosophy, from Descartes through Kant through Husserl. [2]

The Western philosophical tradition, according to Heidegger, “has been focused on self-consciousness and moral accountability, in which we experience ourselves as distinct from the world and others.” Such dualism dominates modern science, but fails to describe how humans relate to the world, which is quite holistic. Heidegger contends that “we are disclosed to ourselves more fundamentally than in cognitive self-awareness or moral accountability. We are disclosed to us in so far as it matters to us who we are. Our being is an issue for us, an issue we are constantly addressing by living forward into a life that matters to us.”

In Being and Time, “Heidegger argues that meaningful human activity, language, and the artifacts and paraphernalia of our world not only make sense in terms of their concrete social and cultural contexts, but also are what they are in terms of that context.”[3] He claimed that the subject-object model of experience, in which we see ourselves as distinct from the world and others, “does not do justice to our experience, that it forces us to describe our experience in awkward ways, and places the emphasis in our philosophical inquiries on abstract concerns and considerations remote from our everyday lives.”[4] Our being in the world is “more basic than thinking and solving problems; it is not representational at all.” When we are absorbed in work, say, using familiar pieces of equipment, “we are drawn in by affordances and respond directly to them, so that the distinction between us and our equipment—between inner and outer— vanishes.”[6]

[Heidegger] argues that our fundamental experience of the world is one of familiarity. We do not normally experience ourselves as subjects standing over against an object, but rather as at home in a world we already understand. We act in a world in which we are immersed. We are not just absorbed in the world, but our sense of identity, of who we are, cannot be disentangled from the world around us. We are what matters to us in our living; we are implicated in the world. [5]

In other words, it makes no sense to believe that our minds are built on atomic, context-free sets of facts and rules, objects and predicates, storage and processing units. No wonder the methods of natural science, which look for structural primitives such as particles and forces, fail to describe our experience of the world. Contrary to the implicit belief of western philosophy and AI research, a computational theory of the mind may be simply impossible. Isn’t our common sense “a combination of skills, practices, discriminations, etc., which are not intentional states, and so, a fortiori, do not have any representational content to be explicated in terms of elements and rules?” [7] The older Wittgenstein agreed, adding in 1948: “[N]othing seems more possible to me than that people some day will come to the definite opinion that there is no copy in the ... nervous system which corresponds to a particular thought, or a particular idea, or [a particular] memory.”

***

A conceptual advance for AI came when some researchers noted that a problem lay in the fact that a computer’s model of the world was not real. The human ‘model’ of the world was the world itself, not a static description of it. What if a robot too used the world as its model, “continually referring to its sensors rather than to an internal world model”? [6] But this approach worked only in micro-environments with a limited set of features recognized by its sensors. The robots did nothing more sophisticated than ants. As in the past, no one knew how to make the robots learn, or respond to a change in context or significance. This was the backdrop against which AI researchers began turning away from symbolic AI to simulated neural networks, with their promise of self-learning and establishing relevance. Slowly but surely, the AI community began embracing Heideggerean insights.
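As a rough, hypothetical sketch of what “using the world as its own model” amounts to, consider a reactive control loop that maps the current sensor reading straight to a motor command and stores nothing between steps. The sensor and actuator functions below are invented placeholders, not any real robotics API.

[code]
# A minimal sketch of a reactive, model-free control loop: consult the world
# (the sensors) right now and react directly, with no stored world model.

import random

def read_sensors():
    """Stand-in for real hardware: distance to the nearest obstacle, in meters."""
    return {"front_distance": random.uniform(0.0, 2.0)}

def act(command):
    print("motor command:", command)

def control_step():
    reading = read_sensors()          # the world is the model
    if reading["front_distance"] < 0.3:
        act("turn_left")              # react directly to what is sensed
    else:
        act("move_forward")
    # Nothing is remembered between steps: no map, no stored facts to keep
    # consistent, and hence no frame problem, but also no foresight.

for _ in range(5):
    control_step()
[/code]

There is no stored description of the world to keep consistent, so the frame problem never arises; but, as the essay notes, neither is there anything for learning, context, or significance to get a grip on, which is why such robots topped out at insect-level behavior.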



Machine neural networks, starting with a blank slate (unlike humans), attempt to simulate biological neurons using a connectionist approach capable of continually adapting its structure based on what it processes and learns. In symbolic AI, a feature “is either present or not. In the net, however, although certain nodes are more active when a certain feature is present in the domain, the amount of activity varies not just with the presence or absence of this feature, but is affected by the presence or absence of other features as well.” [7] Learning is guided using one of three paradigms: supervised learning in controlled domains, unsupervised learning using cost-benefit heuristics, or reinforcement learning based on optimizing certain outcomes.
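To give a concrete sense of what the supervised variant of this looks like in practice, here is a minimal, hypothetical sketch: a tiny feed-forward net with one hidden layer learns the XOR function from labeled examples by gradient descent. It is deliberately a toy, and nothing in it knows what its inputs mean.

[code]
# A toy connectionist example: a small feed-forward net trained by
# supervised learning (gradient descent) on the XOR problem.

import numpy as np

rng = np.random.default_rng(0)

# Labeled training data: XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units; weights start as a "blank slate".
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: each hidden unit's activity depends on all inputs at once.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the squared error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # approaches [[0], [1], [1], [0]]
[/code]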


But the results are not promising. Supervised learning, for instance, remains mired in very basic problems, such as the net’s inability to generalize predictably based on the categories intended by the trainer (except for toy problems that leave little room for ambiguity).

For example, a net trained to recognize palm trees in photos taken on a sunny afternoon may generalize on their shadows instead, and fail to detect any trees in photos from an overcast day. The sample size can be enlarged, but the point is that the trainer doesn’t know what the net is training on, and such category errors continue until an exception shows up. Another net trained to recognize speech may keel over when it encounters a metaphor, say, “Sally is a block of ice.” [6] Outside its training domain, the net is also unable to recognize other contexts, or to know when it is not appropriate to apply what it has learned, problems that humans dynamically solve using their social skills, biological imperatives, imagination, etc.
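The palm-tree story can be reproduced numerically. In the hypothetical sketch below, the training labels correlate almost perfectly with a spurious feature (the “shadow”), the learner latches onto that shortcut, and its accuracy collapses once the correlation is broken at test time. The data, noise levels, and function names are invented for illustration.

[code]
# A toy numerical version of the palm-tree/shadow story: the training labels
# correlate with a spurious feature, the learner exploits the shortcut, and
# it fails when that correlation disappears. All data is synthetic.

import numpy as np

rng = np.random.default_rng(1)

def make_data(n, spurious_correlated):
    """Feature 0: weak 'real' signal. Feature 1: strong spurious cue."""
    y = rng.integers(0, 2, size=n)
    real = y + rng.normal(scale=2.0, size=n)               # noisy true signal
    if spurious_correlated:
        spurious = y + rng.normal(scale=0.1, size=n)        # near-perfect shortcut
    else:
        spurious = rng.integers(0, 2, size=n) + rng.normal(scale=0.1, size=n)
    return np.column_stack([real, spurious]), y

def train_logistic(X, y, steps=2000, lr=0.1):
    """Plain logistic regression trained by gradient descent."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

X_train, y_train = make_data(500, spurious_correlated=True)
X_test, y_test = make_data(500, spurious_correlated=False)   # the "overcast day"

w, b = train_logistic(X_train, y_train)
print("train accuracy:", accuracy(w, b, X_train, y_train))   # near 1.0
print("test accuracy: ", accuracy(w, b, X_test, y_test))     # near chance
[/code]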


Reinforcement learning has its own pitfalls. For instance, what is an objective measure of immediate reinforcement? Even if we take a simplistic view that humans act to maximize “satisfaction” and assign a “satisfaction score” to all outcomes in all possible situations, we need some way to model how “satisfaction” may be impacted by our moods, desires, body aches, etc., as well as their correlation with inputs in a diversity of situations (weather, familiar faces, noise, motion, etc.). But does anyone know what, if any, ‘model rules’ humans obey in their daily behavior? Dreyfus sums it up:

“Perhaps a [simulated neural] net … If it is to learn from its own "experiences" to make associations that are human-like rather than be taught to make associations which have been specified by its trainer, it must also share our sense of appropriateness of outputs, and this means it must share our needs, desires, and emotions and have a human-like body with the same physical movements, abilities and possible injuries.” [7]

In other words, the success of neural nets depends not only on our understanding of how we breathe significance and meaning into our world and finding a way to capture it in the language of machines—these nets also need to come into a social world similar to that of humans and project themselves in time the way humans do with their physical bodies, in order to have a shot at behaving like humans. None of this is even remotely clear to anyone, nor is it clear that it is even amenable to modeling on digital computers. To insist otherwise is not only an article of faith, it also seems to me increasingly obtuse and wild. [8]
 
This is truly a hefty post; I have downloaded it so I can finish reading it at home, since it needs real concentration.

Take my thanks.
 
Why didn't you include those references? It might have been better if you had posted a link to wherever you copied this post from.

Personally, I spent about five years working on artificially intelligent machines. In the end we came to look like a bunch of people with little ability to use math to explain various phenomena; eventually I came to agree with those critics. The consequence of not using math for optimization was that a small problem ended up being solved with enormous amounts of data. You build a huge knowledge database and use a complex knowledge-exchange mechanism to solve a small problem that, had you used math properly, might not have needed that database at all.

Even so, I don't believe AI is dying; rather, I believe the approaches will change, putting far more effort into symbolic math than into the verbose approach that dominated back in the nineties.
 
Thanks for the great post.

I agree that, as nyani<abiziani> put it, “The human ‘model’ of the world was the world itself, not a static description of it. What if a robot too used the world as its model, ‘continually referring to its sensors rather than to an internal world model’? [6]”

I would be interested to know if this thought is your own, or is it referred to in [6]? Would you cite the reference for me?

You further write, “But this approach worked only in micro-environments with a limited set of features recognized by its sensors. The robots did nothing more sophisticated than ants.”

This is not necessarily a detriment, but a step on the ladder of evolution. In the end (after many steps in development) there could be billions and billions of sensors and responsive nodes, just as it is in animal life and humans.

The roboticists would say that they are "continually referring to sensors" as those sensors are sampled for input to the motion/position controllers.
 
Artificial Intelligence


Saturday, June 6, 2009

Japan makes sex robots

A Japanese firm has produced a 38 cm (15 inch) tall robotic girlfriend that kisses on command, to go on sale in September for around $175, with a target market of lonely adult men.



Using her infrared sensors and battery power, the diminutive damsel named "EMA" puckers up for nearby human heads, entering what designers call its "love mode".



"Strong, tough and battle-ready are some of the words often associated with robots, but we wanted to break that stereotype and provide a robot that's sweet and interactive," said Minako Sakanoue, a spokeswoman for the maker, Sega Toys.

"She's very lovable and though she's not a human, she can act like a real girlfriend."



EMA, which stands for Eternal Maiden Actualization, can also hand out business cards, sing and dance, with Sega hoping to sell 10,000 in the first year.

Japan, home to almost half the world's 800,000 industrial robots, envisions a $10-billion market for artificial intelligence in a decade.




http://ai1st.blogspot.com/2009/06/japan-membuat-sex-robots.html


 
I am convinced the bots are posting in here; how else would they know what I like and flood me with the ads? :)

On a serious note

The problem with the criticism is that it takes the idea of "The Singularity is Near" too literally. If you look at the timeframe of evolution to where we are, and the accumulation of information, you would see that the evolution of intelligent life was inevitable, even if over cosmological timeframes in the billions of years, and even though the process was pretty much random.

Today, efforts at AI are deliberate. Depending on your definition of the singularity, the most sophisticated humanlike intelligence will not appear tomorrow or in the next ten years, but the potential is great because the rate of advancement follows a geometric progression, and that explosion has some very interesting repercussions for the AI world.

Ray Kurzweil talks about this a great deal in his book "The Singularity is Near".

So it is no wonder some critics are skeptical, because the beautiful ones who will propel computing are not even born yet. Most people associate the failure of neuron computing / quantum computing with the failure of AI. This need not be so. It is chauvinistic, almost akin to looking only for carbon-based biped life forms on extrasolar planets, simply hubristic and short-sighted.

I believe some of the most interesting avenues to AI have not even been discovered yet, but on a geological scale, the singularity is near indeed.

It is actually nearer than the return of Christ. Now you know I just had to throw that in there for no good reason, if only to irk my Christian friends.
 