I attended a conversation between Andrea Ackerman (digital artist) and Ellen Ullman (author of "The Bug" and "Close to the Machine: Technophilia and Its Discontents") that was supposed to be about issues arising from the ongoing merger of humans and cybernetics. It was organized by Marcia Tanner, the curator of SJMA's Brides of Frankenstein exhibit (worth a look if you're here in SJ).
It was an interesting conversation, but it was quickly redirected into more of a discussion about artificial life and what makes things alive. Also an interesting topic.
However, a couple of things said by the conversants prompted me to think and to ask some questions here. One of Ms. Ullman's objections to the current state of AI research was that most of it has been directed toward reasoning, toward ratiocination. Not enough, in her opinion, has been devoted to artificial emotions, to modelling emotional intelligence.
I tend to agree with this, although not with Ms. Ullman's generally pessimistic view of computing. I think that very few researchers are looking into modelling emotions and emotional intelligence.
I was also reminded of what I saw at IJCAI this year and at UbiComp in Tokyo this month. There is a lot of very successful research going on in vision processing and in a variety of reasoning methods, from case-based to ontological. And let me hasten to say that, after about 30 years of research and development in AI, I am still a proponent of its possibilities.
But what has emerged in my thinking over the past two years is a realization that these techniques, so successful in operation, are not intelligence. They might, for want of a better word, be described as artificial smarts.
I have been summing this up by noting that I do not solve differential equations when I catch a baseball (or navigate a room, or recognize a face, or do any number of other things). Whatever I do might be modelled by a computer solving differential equations (or quadratics, or mapping features into a hyperplane), but it isn't what I do.
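To make concrete the kind of thing I mean by "a computer solving differential equations," here is a minimal sketch of my own (the names and numbers are just illustrative, not anybody's actual vision or robotics system): a machine might numerically integrate the equations of projectile motion to predict where a ball will land. Nothing like this explicit calculation is going on when I reach out and catch it.

```python
# Illustrative sketch only: predict where a ball lands by Euler-integrating
# the equations of projectile motion. This is the sort of model a machine
# might use -- it is emphatically not what a human fielder does.

def predict_landing(x0, y0, vx, vy, g=9.81, dt=0.001):
    """Integrate position until the ball comes back down to y = 0."""
    x, y = x0, y0
    while y >= 0.0:
        x += vx * dt        # horizontal position advances at constant speed
        y += vy * dt        # vertical position follows the current velocity
        vy -= g * dt        # gravity decelerates, then reverses, the climb
    return x                # horizontal distance at which the ball lands

if __name__ == "__main__":
    # A ball leaving the bat 1 m up, 20 m/s horizontal, 15 m/s vertical:
    print(f"Run to roughly x = {predict_landing(0.0, 1.0, 20.0, 15.0):.1f} m")
```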
A researcher at the University of Massachusetts said to me that he is looking for the "ghostly signature" (his term) in the data structures we use to perform these so-called AI tasks now. He has the sense that there is something else in there, a pattern, a property, something, that will unlock the door and take us from the brute-force methods we use today (which work amazingly well) to something more akin to the processes we, as humans, actually use to accomplish those tasks we call intelligent.
I find it interesting that many artists are emerging now who wish to explore this idea as well. It will become more of an issue, perhaps, as we augment or replace more and more of our physical selves with digital components. We'll not only have to ask what makes us human, but also whether these additions to our humanity are doing things in the best or most appropriate way.
Comments?
Bill