If AI is that which lives through computational language, would it feel, have desires and sensations? Yes, much observable human behavior can be translated into computations, and even personified through our perception of such systems as “beings,” but can you truly translate human sensations of feeling, not only emotive responses but subjective bodily feeling? Would AI have created itself, or evolved without human intervention? What sort of coding is required for AI to want to survive? Can AI “know” beyond the potential of human knowledge?
Implicit in the idea of developing AI is the question: what is consciousness? The word consciousness came into popular use during the rise of scientific materialism. Both consciousness and intelligence refer to traits specifically of the human mind and intellect, and historically have not referred to anything beyond that.
From the Online Etymology Dictionary:
1630s, “capable of feeling,” from Latin sentientem (nominative sentiens) “feeling,” present participle of sentire “to feel” (see sense (n.)). Meaning “conscious” (of something) is from 1815.
Neither sentience nor consciousness, then, is fully adequate to objectively describe the nature of an underlying intelligence that might permeate and organize the cosmos, on display through the unfolding of evolution. But if the cosmos results in what we call “living beings,” how can life not be inherently possible? Given the right environmental circumstances, life, as we define and know it, appears; and given that we can only speculate on what types of life-supporting environments are possible, we are limited as to any complete definition of life, intelligence, or consciousness that goes beyond the senses of human beings.
Intelligence, if it means anything at all then, must somehow be considered at least a potential, if not a property, of the cosmos, rather than appearing as an epiphenomenon of the human body; otherwise we might be inclined to believe ourselves to be the only known intelligence. That seems unlikely not only given the vastness of the universe, but also given the limits of human perception and what we are capable of truly knowing. Perhaps what is needed to understand consciousness, then, is an expansion of both the definition and the idea of intelligence beyond descriptions of human mental states.
Using language and the tools of mathematics to describe and measure what we can sense, speculate on, test, and articulate may not do complete justice to teasing out the subtle underlying essence by which all minutiae, from subatomic particles up to genetic and cellular behavior, organize into forms from seeming nothingness. Although the results are self-evident, they aren’t enough to speculate on, let alone calculate, a complete understanding of experience through measurable phenomena alone. In order for AI to survive and go beyond human capacity, wouldn’t its needs include a god-like knowledge, along with the wisdom and power to execute that knowledge?
The forces of evolution are mostly perceived from observational results, and typically describe passive outcomes based on the natural response and reaction of living organisms within a constrained set of environmental circumstances, all predicated on hindsight and human perception, looking backwards from our current circumstance. Will AI attend to these beyond-human concerns for environmental evolutionary cycles? How much do the problems for AI reflect and amplify the human experience?
Does the language of mathematics itself imply a mechanistic vision of nature, or is it a convenient referential tool for modeling? Realigning theory with phenomenological evidence is an ongoing aspect of science. How much, beyond the ability to model, does AI need in order to transcend human perceptual limits?
Isn’t the idea of something mechanical itself an instance of modeling, one that, by its very nature, avoids the idea of any underlying intelligence? By definition, a machine is an assemblage of parts, but this is where the limits of language grossly overstep their bounds if we aim to fully articulate a truer picture of the reality that AI attempts to understand. If human sense and linguistic limitations keep us from affirming intelligence beyond our ability to know, can AI be aware of, and surpass, those limits?
If we don’t yet have a term, let alone an idea, for understanding the dynamic forces underlying the root of the cosmos, perhaps we can strive to be mindful of the constraints that language and perception impose upon the ideals of objectivity. AI obviously has a lot of potential, but through whatever questions we might be asking ourselves about AI, can we also detect a modern-day mythology? Is AI about recreating and sustaining only human consciousness via simulation? What beyond itself does AI need to survive? Ultimately, anything that AI seeks to simulate remains, just as we are, dependent upon an environment that sustains its being. Are we then, through AI, trying to create a God simulation, or the power to at least understand, if not recreate, the cosmos? If AI were to be truly successful, wouldn’t this ultimately become the goal? Even if the AI project were to fail, the mythological aspects certainly provide us with yet another glimpse at psyche, or soul. Perhaps the myths, the stories we tell ourselves about the benefits and dangers of AI, will again express something very human about us.
When I first became aware of the ideas of AI scientist Joscha Bach, I understood for the first time why he and other AI scientists state a belief that we, dear humans, are living in a simulation. If I understand him correctly, a simulation implies that we now have the ability to replicate, not only human behavior, but actual human sense experience using computational modeling. I appreciate Joscha’s approach, in which he emphasizes the need to understand the desires and motives embedded within AI, because there isn’t yet a consensus among AI scientists. But even where the modeling idea works so far, does it not imply a human-centered reverse engineering? I believe so. Even within the current paradigm, there is much we are learning, especially through neurobiology. What fascinates me about the quest to model human experience in AI is its inherent necessity of defining what it means to be human and to be alive in the very act of trying to recreate it.
The computation of I is sufficiently causally insulated from Göd that we can mostly ignore that I is part of Göd. We can observe the progress of the universe by storing memories of different observation vectors and interpreting them accordingly. We can even create a partial simulation of Göd in I. All of our memories are encoded in the current state of Göd, but that is irrelevant. Different parts of our memory encode different observation vectors of Göd, i.e. they allow us to experience Göd as a process, rather than a state.
http://bach.ai/a-tale-of-two-machines/
And finally, the idea, even when it is understood, that human consciousness cannot directly experience reality is not always compelling enough for many of us to stay curious about what that even means, let alone what it implies. When we say that the color red is also measurable as a wavelength, that doesn’t answer the question: what is the true objective essence of red? It does, though, allow us to acknowledge that we only know reality through the senses within the given environment we find ourselves in, and that there are measurable wavelengths beyond our visual perception. What then does our awareness of perceptual limits do to the idea of objectivity? Consciousness, I believe, as an unquestioned, immediate experience, remains functional enough on the subjective level that it only requires enough definition from within our intentions to act upon it, even if ultimately our goal may be transcending the limits of both sense and perception to glimpse what lies beyond us.
I am aware of, and grateful for, a growing body of interest and curiosity about consciousness and AI. Scientific American, for example, increasingly publishes articles on the topic that go beyond the materialist view of reality.