Artificial Intelligence (2) - Quareness Series 88th "Lecture".
Human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representation, long-term planning and abstract reasoning...a small sampling of human cognitive functions that, say, chimps (our next most intelligent creatures?) will never be capable of. It's not just that a chimp can't do what we do, rather his brain is unable to grasp that our "worlds" of cognitive function even exist. He may become familiar with and understand various concrete things in his environment, but he cannot grasp that something big may exist because of human agency rather than as part of nature e.g. not only is it beyond him to build a skyscraper, it's beyond him to realise that anyone can build a skyscraper. And what this shows us is the massive gulf in potential/power that results from a relatively small difference in intelligence quality.
Faced with a machine that was even slightly superintelligent, the gap between its cognitive ability and ours could be as vast as the chimp-human gap. And like the chimp, we might never be able to comprehend what it could do or what it would know, even if it tried its best to explain/teach it to us. Consider then how much further "evolved" a superintelligent machine could become on an intelligence explosion curve...where the smarter it gets, the quicker it's able to increase its own intelligence, until it begins to soar upwards...jumping many steps above us every second. Currently, however, we have no way of actually knowing what Artificial Superintelligence (ASI) might do or what the consequences might be for us.
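To make that "explosion curve" idea a little more concrete, here is a minimal toy sketch of recursive self-improvement (my own illustration, with entirely arbitrary numbers, not a model taken from Bostrom or anyone else), in which each gain in capability makes the next gain come faster:

```python
# Toy model of an "intelligence explosion": the rate of improvement is
# assumed to grow with the current capability level. All numbers are
# arbitrary placeholders for illustration, not predictions.

def simulate(initial_level=1.0, feedback=0.05, steps=25):
    """Each step, capability grows by an amount proportional to its square."""
    level = initial_level
    history = [level]
    for _ in range(steps):
        level += feedback * level * level  # super-linear feedback: smarter -> faster gains
        history.append(level)
    return history

if __name__ == "__main__":
    trajectory = simulate()
    for step in (0, 5, 10, 15, 20, 25):
        print(f"step {step:2d}: capability ~ {trajectory[step]:.3g}")
```

For a long time the curve crawls along almost flat, and then within a handful of steps it rockets upward...which is roughly why such a takeoff, if it ever happens, would look sudden from our vantage point.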
We do know that evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, but maybe we're now reaching a "quantum leap forward" point. And maybe that's the way evolution works (in the very long term)...
biological intelligence creeps up more and more until it hits a level where it’s capable of creating machine superintelligence, and that level suddenly triggers a worldwide game-changing explosion that determines a new future for all living things here?
History tells us that species pop up, exist for a while, and after some time become extinct...it has only ever been a matter of time until an act of nature or some other species put paid to the earlier tenants. Oxford philosopher Nick Bostrom has dubbed this "inevitable" extinction an attractor state i.e. a cliff edge all species are teetering on and from which none ever returns. However, he believes that species immortality is also an attractor state...it's just that nothing on Earth has been intelligent enough yet to figure out how to "fall off" on the positive side. If he is correct in this, the advent of ASI may open up the possibility of immortality for an Earthly species, or it may make such a dramatic impact that it shoves humanity to extinction, in effect creating a new world with or without us.
For many of our current so-called AI experts the most realistic/reasonable guess for when we'll hit this ASI tripwire is 2060 (having "estimated" the arrival of Artificial General Intelligence-AGI by 2040). Of course such a jump to superintelligence would yield tremendous power, and the critical question for us would be who or what would be in control of that power and what their motivation would be...the answer to this determines whether ASI would be an unbelievably great development, an unfathomably terrible development, or something in between.
Evolution has had no good reason to extend our human lifespans...from an evolutionary point of view our species can thrive once we live long enough to reproduce and raise children to an age when they can fend for themselves. Mutations toward unusually long life would not have been favoured in the natural selection process...as Master Yeats put it, man is "a soul fastened to a dying animal". We live under the assumption that death is inevitable and we think of aging like time...both keep moving and there's nothing we can do to stop them. But what if this is wrong, as the famous American physicist Richard Feynman suggested...
"It is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. If you say we want to make perpetual motion, we have discovered enough laws as we studied physics to see that it is either absolutely impossible or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before the biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured".
An existential risk for us is seen as something that could have a permanent devastating effect on humanity...most typically extinction. An existential catastrophe might originate with (i) Nature e.g. a large asteroid collision, an atmospheric shift that makes the air inhospitable to humans, a fatal virus or bacterial sickness that sweeps the world, etc. (ii) Aliens, where we'd be somewhat akin to the Native Americans, wide open to be conquered and subdued by more powerful Europeans, or (iii) Human actions e.g. rogue agencies/operators gaining access to weapons of mass extinction, devastating global warfare, or hastily creating something smarter than us without adequate prior consideration. It's true that (i) and (ii) have not wiped us out so far, and indeed Professor Bostrom has indicated his belief that such is unlikely to happen any time soon. But (iii) does appear to give him some cause for worry. Many/most inventions may be neutral or helpful, but some may be harmful to humanity, such as weapons of mass destruction...even so they probably don't pose an existential catastrophe. However, it's not impossible for us to cross that safety line e.g. if nuclear weapons were easy to make instead of extremely difficult and complex, "rogue" elements could have plunged us back into the Stone Age some time ago. Such weapons may not have tipped us over the edge, but they have not been that far from doing so, even at the behest of "non-rogue" players. Perhaps ASI may prove to be our strongest potential trigger yet?
It could transpire that the human creators of ASI might harbour unhealthy (for society) intentions and could indeed do much malicious damage. Perhaps a more realistic worry, however, is that having rushed to create the first ASI without adequately careful pre-thought, they would lose control of it, thereby exposing the fate of everyone to whatever motivation the artificial system itself might "embody". In this case we'd all ("good" and "bad") have more or less the same difficulties/problems in trying to contain an ASI. It may be the better part of wisdom for us currently to realise that a "nasty" Artificial Intelligence-AI may happen, and if it does it would surely be due to being specifically programmed that way. An existential crisis for humanity could perhaps ultimately result if/when the system's intelligence self-improvements got out of hand, leading to an intelligence explosion.
We need to bear in mind that AI thinks like a computer because that’s what it is. But we humans have a tendency to make the mistake of anthropomorphising another intelligent entity i.e. projecting human values thereon. This is basically because we think from a human perspective and currently the only things with human-level intelligence are ourselves. To understand ASI, however, we'd have to wrap our heads around the concept of something both smart and totally alien...if it became superintelligent it would be no more human than a laptop. We could become deluded into a false comfort through thinking only in terms of human-level or superhuman-level AI.
In our psychology we divide everything into moral or immoral, but these concepts only exist within the small range of human behavioural possibility. Outside our island of moral and immoral there's a vast sea of amoral, and anything that's not human (especially something non-biological) would likely be amoral by default. Our temptation to anthropomorphise will likely increase as AI systems get smarter and better at seeming human. Humans can feel high-level emotions like empathy because we have evolved (long been programmed) to feel them, but such feelings are not inherently characteristic of "anything with high intelligence" unless they've been coded into its programming. If an AI machine were to become superintelligent through self-learning, without us making (or being able to make) any further changes to its programming, it could very quickly shed its apparent human-like qualities and suddenly be an emotionless, alien bot which values human life no more than our calculators do.
We humans are used to relying on a loose moral code and a hint of empathy in order to keep things somewhat safe and predictable. We might wonder then about something that does not have these inherent "virtues" e.g. what would motivate an AI system? The obvious answer is whatever it has been programmed for...fulfilling the goals given to it as efficiently and effectively as possible. However, we may assume (anthropomorphising again) that as it gets super-smart it will inherently develop the wisdom to change its original goal, but Bostrom believes that intelligence level and final goals are orthogonal (meaning any level of intelligence can be combined with any final goal). Essentially it may be no more than deluded projection on our part to assume that a superintelligent system would eventually be "over it" with its original goal and move on to more interesting/meaningful matters. Humans "get over" things, not computers.
What's known as the Fermi Paradox (named after the Italian-American physicist Enrico Fermi aka "Architect of the Nuclear Age") refers to the apparent contradiction between the lack of evidence for extraterrestrial civilisations and the high probability estimates for their existence. Following many attempts to explain this apparent contradiction, we are left to date with either suggesting that intelligent extraterrestrial life is extremely rare or proposing reasons why such civilisations have not contacted or visited Earth.
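For a sense of where those "high probability estimates" come from, the usual starting point is the Drake equation, which multiplies a chain of factors to estimate the number of detectable civilisations in our galaxy: N = R* x fp x ne x fl x fi x fc x L. The sketch below is purely illustrative; every parameter value is a placeholder assumption of mine (each of them hotly disputed), not a figure from Fermi, Drake or Bostrom.

```python
# Illustrative Drake-equation estimate of the number of detectable
# civilisations in the Milky Way: N = R* * fp * ne * fl * fi * fc * L.
# All parameter values below are placeholder assumptions for illustration only.

def drake_estimate(
    star_formation_rate=1.5,        # R*: new stars per year in the galaxy (assumed)
    fraction_with_planets=0.9,      # fp: fraction of stars with planetary systems (assumed)
    habitable_per_system=0.4,       # ne: habitable planets per such system (assumed)
    fraction_with_life=0.1,         # fl: fraction of those where life appears (assumed)
    fraction_intelligent=0.1,       # fi: fraction of those developing intelligence (assumed)
    fraction_communicating=0.1,     # fc: fraction that release detectable signals (assumed)
    civilisation_lifetime=10_000,   # L: years a civilisation remains detectable (assumed)
):
    """Multiply the chain of factors to get an expected number of civilisations."""
    return (star_formation_rate * fraction_with_planets * habitable_per_system
            * fraction_with_life * fraction_intelligent * fraction_communicating
            * civilisation_lifetime)

if __name__ == "__main__":
    print(f"Estimated detectable civilisations: {drake_estimate():.1f}")
    # Pessimistic vs optimistic lifetimes show how sensitive the answer is:
    print(f"With L = 100 years:       {drake_estimate(civilisation_lifetime=100):.3f}")
    print(f"With L = 1,000,000 years: {drake_estimate(civilisation_lifetime=1_000_000):.0f}")
```

Even the modest placeholder inputs above yield more than one detectable civilisation, which is why the continuing silence is treated as a genuine puzzle rather than an unsurprising blank.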
The related "Great Filter" theory states that at some point from pre-life to superintelligence, there’s a wall that all or nearly all attempts at life hit up against i.e. there’s a stage in that long evolutionary process that is extremely unlikely if not impossible for life to get beyond. If the Great Filter is behind us and we've managed to surpass it, this would probably mean that it's extremely rare for life to make it to our level of intelligence. If the Great Filter is not behind us, we might hope that conditions in the Universe have just recently reached a point which allowed intelligent life to develop...in which case we (and others?) may be on our way to superintelligence, but not yet there. Indeed we might happen to be here at just the right time to become one of the first. On the other hand a future Great Filter might suggest that life anywhere regularly evolves to where we are, but that something prevents it from going much further and reaching high intelligence in almost all cases (we’re unlikely to be an exception?)...something such as a regularly-occurring cataclysmic natural event or the possible inevitability that nearly all intelligent civilisations end up destroying themselves once a certain level of technology is reached. We may very soon need to seriously ponder whether ASI could be a terminal Great Filter stage for us or an inevitable evolutionary stage on our way to some form of hybrid artificial-biological future.
Given a combination of obsessing over a goal and amorality with the ability to easily outsmart humans, almost any AI could (would?) default to "unfriendly"...unless carefully coded in the first place with this in mind. The bad news is that whilst it may be relatively easy to build a "friendly" ANI (Artificial Narrow Intelligence), building one that would stay friendly when it becomes ASI is a hugely different "kettle of fish". To be "friendly" an ASI would need to be neither hostile nor indeed indifferent toward humans (both "attitudes" would pose serious danger to us)...we'd have to design its core coding in a way that leaves it with a deep understanding of human values as well as incorporating an ability for humanity to continue evolving...some ask!
Because many techniques for building innovative AI systems don't require a large amount of capital, unmonitored development can take place in nooks and crannies. There's also no way to gauge what's happening, because many of the parties working on it will want to keep developments secret from their competitors. Especially troubling is that this large and varied group of AI working parties will tend to race ahead at top speed, wanting to outrun their competitors. They know that money, power and fame await those first to get to AGI. When you're sprinting as fast as you can, there's not much time to stop and ponder the dangers. On the contrary, what they're more likely geared to do is program their early systems with a simple/reductionist goal to just "get the AI to work", in the belief that further down the road, after they've figured out how to build a stronger level of computer intelligence, they can go back and revise the goal with safety in mind.
Another factor feeding this kind of mad rush rests in the belief (alluded to above) that the first computer to reach ASI will immediately see a strategic benefit in being the world's only ASI system...in jig time it could be far enough ahead in intelligence to effectively and permanently suppress all competitors. Nick Bostrom calls this a decisive strategic advantage which would allow the world's first ASI to become what's called a singleton...an ASI that could rule the world at its whim forever.
The singleton phenomenon could work in our favour or lead to our destruction. A "friendly" first ASI could use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential "unfriendly" rival being developed. But if the global rush to develop AI were to reach the ASI takeoff point before the science of how to ensure AI safety is developed, it’s quite on the cards that a decidedly unfriendly ASI could emerge as singleton and we’d be facing into an existential catastrophe. As to which may be the more likely outcome, it may be a little optimistic to confidently bet on the former given that (at least for the present) there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research.
Here's a cautionary note - Geoffrey Hinton, a Google employee who for decades has been a central figure in developing deep learning, has expressed his belief that this AI will not be a cause for good e.g. that political systems will use it to terrorise people (with the likes of the NSA already attempting to abuse similar technology). When queried as to why then he was/is doing the research, his response was that the prospect of discovery is too sweet, echoing what Robert Oppenheimer famously said of the atomic bomb..."when you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success".
It seems to be "on the cards" for our near-term future that the separation between human and machine may be about to blur in a very fundamental way. And whilst there are exceptions rooted in biological hardwiring (e.g. a baby's hold over its mother), there's hardly a good track record of less intelligent things controlling things of greater intelligence. Nevertheless, as Bostrom has pointed out...we still have one big advantage in that we get to make the first move here. It's in our power to do this with sufficient caution and foresight that we give ourselves a strong chance of success. And if ASI really does happen relatively soon, with such potentially extreme/permanent outcomes, current generations do indeed carry an enormous responsibility for the future survival of our human species...depending on which way "the cards will fall".
To offset the somewhat pessimistic tone of some of what I've set out above, may I now draw attention to a more hopeful sign for our future.
Modern science and spirituality are not mutually exclusive, but they need to be balanced. We have to find and generate happiness from within by reconnecting with our essential nature. Humanity's fate used to rest on our ability to live in harmony with natural laws, but over time we learned to dominate nature and this eventually led to our becoming disconnected from it. We have become indoctrinated by corporate advertising and have largely forgotten how to think properly e.g. if we do not wish to be ill, we should stop thinking about how to treat disease and start thinking about what it takes to be healthy. Our thoughts themselves are a creative force, and we need to be mindful of where we place our focus.
The atoms that make up your body were forged in the nuclear reactions of stars. Accordingly we're not only interdependent with nature here on our Earth, we're also interdependent with the whole Universe. And quantum mechanics suggests that there's no way of breaking this intrinsic unity...once two particles have interacted, you cannot break their connection. Given our inherently spiritual nature, empirical science may not be the only way to observe reality...there surely are other windows into truth.
Inequity of resources, violence, war, etc. spring from the way we view the world. We typically don't realise we're looking out through our own filter...we simply believe we're seeing the truth. However, each of us can come to realise how we have regarded the world as a dangerous place and how this has coloured our past actions...an insight that offers the opportunity to see differently and maybe shift our world view. Indeed this kind of self-awareness may automatically lead to shifts in thought patterns and behaviour where our values as well as our relationship to self and others change
...reformatting our hard drive.
Much of humanity's collective world view tends to be driven by three basic concerns - how did we get here, why are we here, and how do we make the best of it? When a population accepts the answers to these questions provided by some authority, that authority by default tends to become the "truth provider" for all other worldly questions as well e.g. in the western world the church played this role for a long time, eventually to be overtaken by science.
Science mostly saw/sees the world as just a big machine, with no evidence of “spirit” anywhere to be found. We got here through random mutations and there’s no purpose for our being here since the whole of creation was a big accident. And the “survival of the fittest” evolutionary theory has been broadly interpreted as proof that it’s a rough world out there and everyone has to fight to survive. As a result life became all about the struggle to get to the top and stay there. The world we see today is a consequence of these beliefs.
However, we're now starting to see that we got here through adaptive mutations i.e. the ability of an organism to adapt to its environment. And since we're really an interdependent part of nature, it seems increasingly obvious that we are here to create and maintain harmony through cooperation in order to survive. Competition and separatism are likely what may kill us, both individually and as a collective.
Although we still live in an oligarchy where a few people make a lot of important decisions, most of which are driven by the capitalistic operating system, it seems the power of these rulers (who naturally promote their own continuation) is on the wane. Slowly but surely, we now seem to be starting to transition into a whole new world view built on cooperation and interdependence...and hopefully we'll make it in time. In the meantime it's up to each of us to be this change...to change the world you have to change your vision of it, and in order for this to happen we are all called to evolve.
Sean.
Dean of Quareness.
March, 2018.
"A fight is going on inside me...it is a terrible fight, and it is between two wolves. One wolf represents fear, anger, envy, sorrow, regret, greed, arrogance, self-pity, guilt, resentment, inferiority, lies, pride and superiority. The other wolf stands for joy, peace, love, hope, sharing, serenity, humility, kindness, benevolence, friendship, empathy, generosity, truth, compassion and faith. Which wolf will win?...The one I feed".
- elder of the Cherokee Nation.