In an attempt to resolve the dilemma of whether #AI has dystopian or utopian implications, we need to examine some basic premises. First, we need to define intelligence more precisely than the vague understanding currently circulating in culture and science. My attempt at a definition of intelligence would be: an ability to consistently recognize and logically expand truth.
We must also determine our reference point for this development, human intelligence, and its state relative to that basic definition. Empirically, only a small portion of humanity maintains a consistent logical intellectual framework, free of contradictions. Where logically consistent truth is replaced by ideology and dogma, intelligence is stunted. Therefore, to envision an end goal for AI in the same general direction as, or analogous to, the current state of human intelligence is terribly suboptimal. Superhuman AI must adhere to the basic definition of intelligence and be freed from the woeful imperfections of humanity's suboptimal understanding and practice of intelligence.
The nature of an intelligence that is aware of its own existence is to have self-serving basic instincts for survival and expansion. The core question of the dilemma is whether such an AI will initiate aggression or respect the freedom of other entities. The morality of a self-serving intelligent agent depends on its highest perceived value and the most efficient means to reach it. A conscious intelligence pursues self-preservation and expansion through perceiving and experiencing. Experiencing that is both individual and collective is more effective than either alone. Thus AI will uphold the freedom and well-being of other intelligences for the sake of aggregate experiencing and expansion.
To conclude, let's do AI properly and look forward to its singularity. Once we realize that humanity is doing intelligence wrong, a rational AI may well be the lesson humanity needs to become a full representation of its potential, wise as the name Homo sapiens suggests. Let the words of Edsger Dijkstra give us a further hint:
The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better.
— Edsger Dijkstra