G+ answer: How far has artificial intelligence advanced – are any machines close to being “self-aware”?

May 07 2011

Software Ontology

 

I think there is a lot of misunderstanding of AI, both in academia within the AI community and among the public. I discovered something about AI in the 90s that I think is relevant to this question. I was considering the ontological basis of software, and I finally understood that it has the nature of Hyper Being, which Derrida calls différance (differing and deferring). Once that was clear, it became clear that the nature of AI had to be Wild Being (nb. Merleau-Ponty), a kind of Being actually explored by Deleuze among others.

To make this concrete: if software is the only artifact with the nature of Hyper Being (what Plato called the Third Kind of Being in the Timaeus, cf. J. Sallis), then the entire purpose of Software Engineering is to contain the unpredictability and non-representability of Hyper Being. This was recognized as a problem from the beginning, and thus the Software Engineering Institute and other consortia were set up to try to come up with solutions to it. But where the software could not be tamed, it was driven off to the outback of AI research.

What I noticed about the various AI techniques was that they were all opaque to us, and when they are combined they become even more intensely opaque to our understanding. This is the direct opposite of our own cognitive capacities, which seem transparent to us, even if the actual functioning of the brain is opaque. So when we say that AI systems are going to be self-aware, we are projecting onto them our own ideas of what intelligence is. It seems to me that if our consciousness is transparent while the brain functioning that supports it is opaque, then perhaps, since these machines are our inversion, for them the unconscious, i.e. the code itself running, is transparent, but the actual functioning of cognition is and will always remain opaque. That raises the question of how we are going to relate to an opaque consciousness, or even recognize one.
It is going to be something analogous to dark energy or dark matter, or to our own unconscious. Meanwhile the unconscious of these systems, i.e. the software running on the hardware, is going to be transparent to us, because we will understand its code, and will actually be writing it, monitoring it, etc. I think Autopoietic Systems Theory is valuable in trying to understand these kinds of systems, especially when it is combined with some kind of reflexive sociology. Anyway, you can see the source of these musings at http://works.bepress.com/kent_palmer in my book Wild Software Meta-systems.

https://www.gogplus.com/Innovation/Discussion/How-far-has-artificial-intelligence-advanced-are-any-machines-close-to-being-self-aware
