Artificial Intelligence (1) - Quareness Series 87th "Lecture".
Is the "far future" coming sooner than we think?
Imagine if you could go back in time, say to 1750, and bring someone from there back with you to today's world. It's quite possible that such a surprising, shocking or even mind-blowing experience would prove fatal for her/him, given the massive change in our human technology within such a relatively short span of time. For someone back in 1750 to do the same with a body from, say, 1500 would no doubt be quite a shock for the latter, but an unlikely fatal one, because while 1500 and 1750 were very different, they were much less different than 1750 is to now. Going back even further into the past from 1750, the "fatal" effect might not kick in unless you went maybe all the way back to about 12,000 BC, i.e. before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. And for the same effect going back further again from 12,000 BC, it might need linking up with someone from 100,000 years earlier.
This pattern of our progress moving quicker and quicker as time goes on has been dubbed human history's Law of Accelerating Returns by the American author and futurist Ray Kurzweil, with more advanced societies having the ability to progress at a faster rate than less advanced ones. Here we're reminded of the 1985 movie Back to the Future, where the "past" took place in 1955 and Marty McFly (Michael J. Fox), going back in time, was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang...quite a different world from 1985. However, if the movie were made today and the "past" took place in 1987/8, we would see much bigger differences with a time before widely-used personal computers, internet or cell phones. Today's teenage Marty McFly would be much more out of place in 1988 than the movie's Marty was in 1955. Our world is more technologically advanced now, with much more change having happened in the most recent 30 years than in the prior 30...the Law of Accelerating Returns.
Kurzweil has suggested that by 2000, the rate of progress was five times the average rate during the 20th century as a whole. He has also stated his belief that another 20th century's worth of progress happened between 2000 and 2014, that a similar amount will happen by 2021 (now just 3 years away), and that a couple of decades later the same amount of progress will be happening multiple times in a single year. All in all he thinks (because of the Law of Accelerating Returns) that the 21st century will achieve 1,000 times the progress of the 20th. If he and the numerous scientists who agree with him are correct, the world in 2050 might be so vastly different from today's that we would barely recognise it...a logical enough prediction going by our past history.
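To make the arithmetic concrete, here's a little toy model (my own illustrative sketch, not Kurzweil's actual calculation) which simply assumes the rate of progress doubles every decade, measured in "year-2000 units"...the roughly 1,000x figure for the 21st versus the 20th century then falls straight out:

```python
# Toy model of the Law of Accelerating Returns (an illustrative
# assumption, not Kurzweil's own maths): suppose the rate of
# progress doubles every decade, measured in "year-2000 units".
import math

def progress(start_year, end_year, doubling_years=10, ref_year=2000):
    # Integrate the rate 2**((t - ref_year) / doubling_years) between
    # the two years: total progress in year-2000-equivalent years.
    k = math.log(2) / doubling_years
    rate = lambda t: math.exp(k * (t - ref_year))
    return (rate(end_year) - rate(start_year)) / k

c20 = progress(1900, 2000)   # the 20th century
c21 = progress(2000, 2100)   # the 21st century
print(f"20th century: ~{c20:.0f} year-2000-equivalent years of progress")
print(f"21st century: ~{c21:.0f} year-2000-equivalent years of progress")
print(f"ratio: ~{c21 / c20:.0f}x")   # ~1,024x...roughly the 1,000x claim
```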
When it comes to history, we tend to think in straight lines: when we imagine the progress of the next 30 years, we're inclined to look back at the progress of the previous 30 as an indicator of how much will likely happen. It's most intuitive for us to think linearly, when we would be better off thinking exponentially, e.g. envisaging the next 30 years based not on the past rate of progress but on the current, faster one...and even then it could prove more accurate to imagine technology advancing at a much faster rate than it currently is. Of course history, and in particular very recent history, may seem linear when we look at a tiny segment...kinda like a small segment of a huge circle up close...and exponential growth isn't totally smooth and uniform. Kurzweil (again) explains progress as really happening in "S-curves": when a new paradigm sweeps the world, the curve goes through three phases - (a) slow exponential growth, (b) rapid exponential growth and (c) a levelling off as the particular paradigm matures. Looking back at the chunk of time between 1995 and 2007, we can detect something like his phase (b) in action with the explosion in the use of the internet, the likes of Microsoft, Google and Facebook penetrating public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. The period since 2008 has been less groundbreaking on the technological front, maybe reflecting phase (c). Using just the very recent past to predict the future might therefore miss the bigger picture, e.g. a new and huge phase (b) growth spurt could be brewing right now.
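For the curious, here's a minimal sketch of such an S-curve using a standard logistic function (the midpoint and steepness numbers are arbitrary, picked purely for illustration):

```python
# A minimal sketch of Kurzweil's "S-curve": a logistic function whose
# growth looks exponential early on, explodes in the middle, then
# levels off as the paradigm matures.
import math

def s_curve(t, midpoint=0.0, steepness=1.0):
    # Logistic adoption curve: slow start, rapid middle, flat finish.
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in range(-6, 7, 2):
    level = s_curve(t)
    if level < 0.1:
        phase = "(a) slow exponential growth"
    elif level < 0.9:
        phase = "(b) rapid exponential growth"
    else:
        phase = "(c) levelling off"
    print(f"t={t:+d}  adoption={level:5.1%}  {phase}")
```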
Generally we base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as “the way things happen”. Most of us are also limited by our imagination, which takes our experience and uses it to conjure future predictions. However, often what we know simply doesn’t give us the tools to think accurately about the future...when we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. And so while this instinct might feel correct right now, it’s probably actually wrong. Being truly logical and expecting historical patterns to continue, we might conclude that much more will change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced known species on the planet (us) keeps making larger and larger leaps forward at an ever-faster rate, at some point we’ll make a leap so great that it could completely alter life as we know it and indeed our very perception of what it means to be human...kinda like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live here on Earth. And much of what’s going on in science and technology today seems to be quietly hinting that life as we currently know it may not withstand the leap that’s coming next.
The term “singularity” has been used in physics to describe a phenomenon like the infinitely small, infinitely dense point at the heart of a black hole, where the usual rules don’t apply. The American computer science/maths professor and science fiction writer Vernor Vinge wrote a famous essay in 1993..."The Coming Technological Singularity: How to Survive in the Post-Human Era"...in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own, after which life as we know it will be forever changed and the normal rules will no longer apply. The aforementioned Ray Kurzweil has also given us his definition of the singularity: the time when the Law of Accelerating Returns reaches such an extreme pace that technological progress seems to be happening at an infinite speed, after which we’ll be living in a whole new world.
It would now appear that there may be a number of interim technological development steps required before we reach these singularity times...(a) Artificial Narrow Intelligence (ANI), which specialises in one area only, e.g. master-level chess...(b) Artificial General Intelligence (AGI), which can perform any intellectual task as well as a human, including having the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience...(c) Artificial Superintelligence (ASI), which would range from being a little smarter overall than a human to way beyond: “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”, as the Swedish philosopher Nick Bostrom at the University of Oxford has defined it. At this point in time we can see ANI being deployed all around us, and there may very soon (or perhaps even now) be no turning back for us on the AI revolution road through to AGI and on to ASI. If humanity continues down this path, it's unclear whether we will survive the journey or not, but either way it would appear most everything for us will change...a somewhat ironic scenario given the widespread apparent obsession of many of today's world leaders with trying to turn the clock back and tightly control everything from the world climate to global security. (I'm very conscious here that many of our current prominent thinkers do not regard this type of super-humanoid AI evolution as likely to happen at all in the near-to-medium-term future, and that they consider it much more important and rational to be concerned with, and proactively do something about, our "real urgent problems such as climate change, etc."...we'll see.)
Now none of this is to suggest that getting to AGI would be anything other than an incredibly difficult challenge. Many things that are hard for us humans (like calculus, financial market strategy or language translation) can be mind-numbingly easy for a computer, while many things that are easy for us (like vision, motion, movement and perception, which we can do mostly "without even thinking") can be insanely hard for it. These extremely complicated abilities of ours only seem easy because we have many millions of years of evolution-perfected software in our brains to deploy.
An essential first step towards AGI would appear to be an increase in the power of computer hardware to equal the brain's raw computing capacity. It's estimated that affordable computers reached about a trillionth of human level in 1985, a billionth in 1995, a millionth in 2005 and a thousandth in 2015, putting us right on pace for an affordable computer by 2025 that rivals the raw computational power of the human brain. Widespread affordable AGI-caliber hardware is thus almost upon us, with an inevitable(?) built-in momentum to develop software worthy of it (i.e. software delivering human-level intelligence).
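Those estimates imply affordable computing power gaining a factor of roughly 1,000 on the brain every decade, and a naive extrapolation of that trend (illustrative only, not a real forecast) does indeed land on 2025:

```python
# The estimates above: affordable computing as a fraction of one
# human brain's raw capacity, gaining a factor of ~1,000 per decade.
fraction_of_brain = {1985: 1e-12, 1995: 1e-9, 2005: 1e-6, 2015: 1e-3}

year, fraction = 2015, fraction_of_brain[2015]
while fraction < 1.0:
    year += 10
    fraction *= 1000      # ~1,000x per decade, per the quoted figures
print(f"Trend reaches human-brain parity around {year}")   # -> 2025
```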
The science world is also already working hard on reverse-engineering the brain to figure out how evolution made such a radical thing, with a view to unlocking the secrets of how the brain runs so powerfully and efficiently, drawing inspiration from it and perhaps stealing its innovations. For this purpose we're using a computer architecture that mimics the brain (the artificial neural network): it starts out as a network of transistor “neurons” connected to each other with inputs and outputs, but knowing nothing...like a human infant brain. It learns by trying to carry out tasks: its first neural firings and subsequent guesses may be completely random, but when it's told it got something right or wrong, the transistor connections in the firing pathways that happened to create the answer are strengthened or weakened accordingly. A lot of this testing and feedback enables the network by itself to form smart neural pathways and the machine to become optimized for the task...sorta mirroring a little of how the brain itself learns.
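As a flavour of that "strengthen or weaken the connections" idea, here's a minimal sketch of a single artificial neuron learning the logical OR function with the classic perceptron rule (my own toy example, not any particular lab's system)...the weights start at zero, "knowing nothing", and each wrong guess nudges the responsible connections:

```python
# A single artificial "neuron" learning OR via the classic perceptron
# rule: wrong guesses strengthen or weaken the connections ("weights")
# that produced the answer.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]                      # the OR truth table
w = [0.0, 0.0]                              # connection strengths: knows nothing
bias, lr = 0.0, 0.1                         # firing threshold and learning rate

for epoch in range(20):                     # repeated testing and feedback
    for (x1, x2), target in zip(inputs, targets):
        fired = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
        error = target - fired              # the "right or wrong" signal
        w[0] += lr * error * x1             # strengthen/weaken each pathway
        w[1] += lr * error * x2             #   in proportion to its input
        bias += lr * error

for (x1, x2) in inputs:
    print((x1, x2), "->", 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0)
```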
Of course it could happen that the scientists get desperate or impatient and try programming the machines to self-test and evaluate the results, and this might prove the most promising methodology...we’d build a computer whose two major skills would be doing research on AI and coding changes into itself, thereby allowing it not only to learn but to improve its own architecture. In effect we’d teach computers to be computer scientists whose main job is to figure out how to make themselves smarter.
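A crude toy sketch of that self-test-and-improve loop (purely illustrative...real AI research would be vastly more sophisticated than this simple hill-climbing): a program benchmarks itself, tries a random tweak to its own configuration, and keeps the change only if its score improves:

```python
import random

# The system's own "settings", which it is allowed to rewrite.
config = {"speed": 1.0}

def benchmark(cfg):
    # Stand-in self-test: pretend performance peaks at an optimum
    # speed of 10 (unknown to the system), with penalties either side.
    return -(cfg["speed"] - 10.0) ** 2

score = benchmark(config)
for _ in range(500):
    candidate = {"speed": config["speed"] + random.uniform(-0.5, 0.5)}
    if benchmark(candidate) > score:        # keep only genuine improvements
        config, score = candidate, benchmark(candidate)

print(f"self-tuned speed: {config['speed']:.2f} (the optimum was 10)")
```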
With rapid advancements in hardware and innovative experimentation in software happening simultaneously, AGI could creep up on us quickly and unexpectedly. When it comes to something like a computer that improves itself, progress might seem a long way off but actually be just one tweak of the system away from becoming 1,000 times more effective and zooming upward to human-level intelligence. And even an AGI with an identical level of intelligence and computational capacity to a human would still have significant advantages over us.
Today's microprocessors run at around 2 GHz, i.e. some 10 million times faster than our brain neurons' maximum firing rate of about 200 Hz. In addition, the human brain is limited in size by the shape of our skulls, whilst computer hardware can expand to any physical size, allowing for much larger working memory (RAM) and long-term memory (hard drive storage) with far greater capacity and precision than our own. Durability also confers an advantage on the computer, given that its transistors are more accurate than our biological neurons and less likely to deteriorate (and can be easily repaired or replaced if they do). Computer software too can receive updates and fixes and can be easily experimented on.
Humanity's collective intelligence is one of the main reasons we've been able to get so far ahead of other species, but computers might prove way better at it than we are. A wide network of AIs running a particular program could regularly sync with itself so that anything any one computer learned would be instantly uploaded to all the others. Such a group could also take on one goal as a unit, because there wouldn’t necessarily be the dissenting opinions, motivations and self-interest we have within the human population. And an AI that gets to AGI by being programmed to self-improve wouldn’t necessarily see “human-level intelligence” as some important milestone and wouldn’t have any reason to “stop” at our level...indeed it would perhaps hit human intelligence only for a brief instant before racing onwards to the realm of superior-to-human intelligence.
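As a toy illustration of that instant fleet-wide learning (a deliberately simplified sketch...a real system would sync model weights over a network), imagine every agent reading from and writing to one shared knowledge store:

```python
# Every agent reads from and writes to one shared store, so a fact
# learned by any one of them is instantly available to all the rest.
shared_knowledge = {}

class Agent:
    def __init__(self, name):
        self.name = name

    def learn(self, fact, value):
        shared_knowledge[fact] = value      # "instant upload" to the fleet

    def recall(self, fact):
        return shared_knowledge.get(fact, "unknown")

a, b = Agent("unit-A"), Agent("unit-B")
a.learn("opening move vs grandmaster X", "e4")
print(b.recall("opening move vs grandmaster X"))   # unit-B knows it too: e4
```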
The somewhat scary prospect we'd face at that point is an intelligence explosion born of "recursive self-improvement". Once an AI system at basic human capacity, programmed with the goal of improving its own intelligence, starts to do so, each gain might make the next one easier: making bigger and bigger leaps forward, the AGI would rapidly outstrip any human in smartness and soar upwards to the super-intelligent level of an ASI system - an ultimate example of Kurzweil's Law of Accelerating Returns.
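A simple numerical toy (the 1% and 10% figures are arbitrary, chosen for illustration) shows why this "each gain feeds the next" dynamic is so explosive compared with steady, human-driven improvement:

```python
# Steady, human-driven improvement (fixed gain per step) versus
# recursive self-improvement (gain proportional to current smarts).
human_level = 1.0
steady, recursive = 1.0, 1.0

for step in range(1, 101):
    steady    += 0.01                 # fixed incremental progress
    recursive *= 1.10                 # a smarter system improves itself faster
    if recursive >= 1000 * human_level:
        print(f"recursive path hits 1,000x human level at step {step}, "
              f"while the steady path is still at {steady:.2f}x")
        break
```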
It could take decades for the first AI system to reach low-level general intelligence, but if/when it finally happens, a computer will be able to understand the world around it as well as, say, a human four-year-old. Then, maybe within an hour of hitting that milestone, the system could pump out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. And maybe about 90 minutes after that, the AI could have become an ASI up to 170,000 times more intelligent than a human.
The dominance of humanity on this Earth of ours suggests that with intelligence comes power, and this means that any ASI created by us would likely be the most powerful "being" in the history of life here, utterly controlling all living things including us. Given what our own meager brains have been able to invent, something so many multiple times smarter than us might have no problem controlling everything and anything in what would seem to us a sorta god-like manner. "Good" stuff like creating the technology to reverse human aging, curing disease and hunger and even mortality, and reprogramming the weather to protect the future of life on Earth might all suddenly be possible. However, also possible with the birth of such a new omnipotent entity might be the early end of all biological life on Earth, and the most important/urgent question facing humanity at that point of singularity would be whether or not this new "god" is a benevolent one.
To be continued...
Sean.
Dean of Quareness.
February, 2018.