Known and Unknown Unknowns - Quareness Series (26th "Lecture").
Sometimes we might read a book which calls out in us a series of "swinging" emotional reactions, ranging from "right on" on some pages to "no way" on others. One such book which recently propelled me on this type of roller coaster was "The Black Swan" by Nassim Nicholas Taleb, concerning the matter of randomness and what he calls the "cemetery of invisible consequences".
For Taleb the illusion of stability lowers our perception of the risks we incurred in the past, particularly for those of us who were lucky enough to have survived them. He cautions us to beware of (1) uninformed risk taking, and (2) the belief that we are built to understand and overcome nature and that our decisions are the result of our own choices (on the contrary, too many evolutionary instincts drive us).
Evolution, he says, is a series of good and bad flukes. You only see the good but in the short term it's not obvious which traits are really good for you. The problem (with our perception) is that we are the survivors and so we cannot easily compute odds without considering that the fact of our existence imposes restrictions on the process that led us here. Our being here is a low probability occurrence and we tend to forget this.
The Aristotelian "because" is not there to account for a solid link between two items, but rather to cater to a hidden weakness for imparting explanations. Whenever survival is in play it's prudent not to immediately look for causes and effects; it could simply be randomness. "Because" is better limited to situations where it is derived from experiments, not from backward looking history.
Humanity has a strong tendency to be superficial, to heed what we see and not heed what does not vividly come to mind. Silent evidence is what events use to conceal their own randomness. It's a big mistake to ignore the abstract.
To correct bias we must take into account the "dead" as well as the "living". Bias can hide best where its impact is largest: the more injurious the treatment, the larger the difference between the survivors and the rest, and the more fooled we will be about the strengthening effect. Our emotional apparatus is designed for linear causality, but modern reality rarely gives us the privilege of a satisfying linear positive progression, e.g. you may think about a problem for a year and learn nothing; then, unless you are disheartened by the emptiness of the results and give up, something will come to you in a flash.
There tends to be little room in our consciousness for humans who do not deliver visible results, or for those who focus on process rather than results. Many (most?) will believe almost anything you say provided you do not exhibit the smallest shadow of diffidence, and it helps to be as smooth as possible in personal manners. It is much easier to signal self confidence if you are exceedingly polite and friendly, and you can control people without having to offend their sensitivity. With higher levels of dopamine in the brain, pattern recognition is boosted and people are made more vulnerable to all manner of narrative fads (e.g. superstition, economics, etc.). Perhaps our minds are largely victims of our physical embodiment with our perception of causation having a biological foundation?
We have a tendency to impose narrativity and causality - dimension reduction - which has a chronological dimension and leads to the perception of the flow of time... in a single direction. However, we tend to more easily remember those facts from our past that fit a narrative, while we tend to ignore those others that do not seem to play a causal role in that narrative. Indeed our memory itself seems to be more of a self serving revision machine, in that you remember the last time you remembered an event and, without realising it, change the story at each subsequent remembrance. While we may believe that memory is fixed, constant and connected, this is far from the truth. We pull memories along emotive lines, revising them involuntarily and unconsciously. We continuously renarrate past events in the light of what appears to make what we think is logical sense, after those events occur. We appear to learn mostly from repetition, at the expense of events that have not happened before. Events that are unrepeatable are ignored before their occurrence and overestimated for a while after. It seems that narratives, the sensational and the emotional impose on us a wrong map of the likelihood of events.
Narrative seems to work in "Mediocristan" where the past is likely to yield to our inquisition, but not in "Extremistan" where you do not have repetition. We tend to react to a piece of information not on its logical merit but on the basis of which framework surrounds it and how it registers with our social emotional system. We have a natural tendency to look for instances that confirm our story and our vision of the world. For most of us our happiness seems to depend far more on the number of instances of positive feelings (positive affect) than on their intensity when they hit. Plenty of mildly good news is preferable to one single lump of great news. And the reverse appears to apply to unhappiness - it's better to lump all your pain into a brief period rather than have it spread over a longer one.
A relative few, however, are more inclined towards Karl Popper's approach of conjecture and refutation, whereby they formulate a conjecture and start looking for observations that would prove them wrong, e.g. good chess players focus on where a speculative move might be weak rather than look for confirmatory instances... they search for their own weaknesses. In truth there may be no such thing as corroborative evidence, despite or even because of our constant looking for confirmation.
The random world of "Extremistan" is dominated by very rare events. The instinct to make inferences rather quickly and to focus on a small number of sources of uncertainty (or causes of "known Black Swans") remains rather ingrained in us. For the 1,000 day turkey the greatest risk arises at the time of the greatest feelings of safety, due to the evidence accumulated up to that point.
We tend to be blind to the "Black Swan" because
- we focus on preselected segments of the seen and generalise from it to the unseen (the error of confirmation);
- we fool ourselves with stories that cater to our Platonic thirst for distinct patterns (the narrative fallacy);
- we behave as if the "Black Swan" doesn't exist (human nature is not programmed for "Black Swans");
- what we see is not necessarily all that is there, as history hides "Black Swans" from us and gives us a mistaken idea about the odds of these events (the distortion of silent evidence);
- we "tunnel" or focus on a few well defined sources of uncertainty, on too specific a list of "Black Swans" (at the expense of others that do not easily come to mind).
In general positive "Black Swans" take time to show their effect (e.g. Internet) while negative ones happen very quickly - it's much easier and much faster to destroy than to build.
Erudition signals genuine intellectual curiosity. It accompanies an open mind and the desire to probe the ideas of others. Above all the erudite (with both breadth and depth of familiarity/knowledge) can be dissatisfied with his own knowledge, and such dissatisfaction can be a wonderful shield against Platonicity (the desire to cut reality into precise shapes), e.g. the simplifications of "the 5 minute manager" or the philistinism of the over specialised scholar.
A scalable profession is one in which you are not paid by the hour and thus not limited by the amount of your labour. In a non scalable profession your income depends on your continuous efforts rather than on the quality of your decisions. This useful divide separates those professions in which one can add income with no greater effort from those in which we need to add labour and time (both of which are in limited supply). If you are an ideas person you do not have to work hard, only to think intensively. However, making a living in a scalable profession is a good thing only if you are successful. It's more competitive, produces big inequalities and is far more random, with major disparities between effort and rewards - a few giants and lots of dwarfs.
Evolution is scalable: the DNA that wins (whether by luck or survival advantage) will reproduce itself, like a best selling book or successful recording, and become pervasive. Other DNA will vanish.
In "Mediocristan" when your sample is large, no single instance will significantly change the aggregate or the total. On the other hand in "Extremistan" inequalities are such that one single observation can disproportionately impact the aggregate or the total.
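This single-observation contrast can be illustrated with a small sketch. The distributions and parameters below are my own illustrative assumptions (Gaussian heights versus a heavy-tailed Pareto "wealth"), not figures from the book:

```python
import random

random.seed(42)

# Mediocristan: 10,000 adult heights (Gaussian, mean 170 cm, sd 10 cm).
heights = [random.gauss(170, 10) for _ in range(10_000)]

# Extremistan: 10,000 "wealth" figures from a heavy-tailed Pareto
# distribution (alpha = 1.1, scale 10,000 -- illustrative numbers only).
wealths = [10_000 * random.paretovariate(1.1) for _ in range(10_000)]

def share_of_largest(sample):
    """Fraction of the total contributed by the single largest observation."""
    return max(sample) / sum(sample)

# The tallest person barely moves the height total; the richest person
# takes a visible bite out of the wealth total.
print(f"tallest person's share of total height: {share_of_largest(heights):.4%}")
print(f"richest person's share of total wealth: {share_of_largest(wealths):.2%}")
```

Run it a few times with different seeds: the height share stays negligible, while the wealth share swings wildly - exactly the sense in which one observation can disproportionately impact an Extremistan aggregate.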
Most of the value in distinguishing between "M"and "E" lies in its application to knowledge. Matters that seem to belong to "Mediocristan" (and thus subject to mild randomness) are - height, weight, hourly earnings, gambling profits for constant betting size, car accidents, mortality rates and IQ. Matters that seem to belong to "Extremistan" (and subject to wild randomness) are - wealth, income, population of cities, size of planets, financial markets, commodity prices, inflation rates and economic data.
"Mediocristan" is where we must endure the tyranny of the collective, the routine, the obvious and the predicted. "Extremistan" is where we are subjected to the tyranny of the singular, the accidental, the unseen and the unpredicted.
"Black Swan" logic makes what you don't know far more relevant than what you do know. The payoff of a human venture (e.g. starting a new business in a sector) is in general inversely proportional to what it is expected to be, and the inability to predict outliers implies the inability to predict the course of history. And the real reason free markets may work at all is not by giving rewards or incentives for skill, but by giving people the chance to be lucky through trial and error i.e. to tinker as much as possible and collect as many "Black Swan" opportunities as they can.
Almost everything in social life is produced by rare but consequential shocks and jumps, i.e. almost all social matters are from "Extremistan", which induces "Black Swans". Nevertheless almost everything stated about social life focusses on the "normal", particularly with those "mild randomness assumed" Bell Curve methods of inference that tell you close to nothing (because the Bell Curve ignores large deviations, cannot handle them and yet makes us confident that we have tamed uncertainty). In the real world most of the randomness that matters is scalable, wild randomness.
Categorising always produces a reduction in true complexity. Any reduction of the world around us can have explosive consequences since it rules out some sources of uncertainty - it drives us to a misunderstanding of the fabric of the world.
The attributes of the uncertainty we face in real life have little connection to the sterilised ones encountered in exams and games. It's a great illusion (the ludic fallacy) to believe otherwise. Computable risks outside of a controlled environment ("sterilised constructs" e.g. casino) are largely absent from real life. They are laboratory contraptions (like the man made global warming trend hypothesis?).
It's thought that before the "Enlightenment" people prompted their brain to think, not to compute. We now tend to love the tangible / confirmation / palpable / real / visible / concrete / known / seen / vivid / social / embedded / emotionally laden / salient / stereotypical / moving / theatrical / romanced / cosmetic / official / scholarly sounding verbiage / pompous Gaussian economist / mathematical bull / pomp / Académie Française / Harvard Business School / Nobel Prize / dark business suits with white shirts and Ferragamo ties / moving discourse and the lurid. And most of all we favour the narrated.
However, we are not made to readily understand abstract matters - we "need" context - and randomness and uncertainty are abstractions. We tend to respect what has happened, ignoring what could have happened. We are mostly shallow and superficial and we don't know it. Beaming light on the unseen is costly in both computational and mental effort, but probability (a liberal art) is a child of scepticism, not a tool for people to satisfy their desire to produce fancy calculations and certainties.
To take a simple step to a higher form of life, you may have to denarrate and train your reasoning abilities to control your decisions. You'd need to train yourself to spot the difference between the sensational and the empirical, and to learn to avoid "tunnelling". When dealing with uncertainty we need to avoid "focus", because focus translates into prediction problems. And prediction, not re-enaction, is the real test of our understanding of the world. In the words of Niels Bohr (Danish physicist) - "It is tough to make predictions, especially about the future" - or to put it another way... "the future ain't what it used to be" (Yogi Berra - US baseball player/coach).
As our knowledge grows it is threatened by a greater increase in confidence, and thus at the same time a potential increase in confusion, ignorance and conceit. We tend to overestimate what we know and underestimate uncertainty by compressing the range of possible uncertain states (epistemic arrogance). The longer the odds involved, the larger this arrogance tends to be.
Additional evidence of the minutiae can be useless, even toxic. The more information you give someone, the more hypotheses they are likely to form along the way and perhaps the worse off they will be. Our ideas are "sticky": once we produce a theory we are reluctant to change our minds, and so those who delay developing their theories are probably better off. Where you develop your theories on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts those opinions, even if it is obviously accurate. The expert of course will know more, but their probabilities will be off.
Things that move (requiring knowledge) do not really have experts, while things that don't move seem to have some experts. Experts are narrowly focussed persons who need to "tunnel" and who do well in situations where "tunnelling" is safe (because "Black Swans" are not consequential). Some examples of real experts are - livestock judges, astronomers, test pilots, soil judges, chess masters, physicists, accountants, grain inspectors, photo interpreters, insurance analysts. Examples of unreal experts are - stockbrokers, clinical psychologists, psychiatrists, court judges, counsellors, personnel selectors, intelligence analysts, economists, financial forecasters, political scientists, "risk experts", financial advisors.
We live in "Extremistan" not "Mediocristan", and while our predictions may be good at predicting the ordinary, they utterly fail in predicting the irregular. Once upon a time one Philip Tetlock, a Professor of Psychology at the University of Pennsylvania, wondering what it is about politics that makes people so dumb, concluded that the hedgehog knows one big thing and is married to a single big "Black Swan" event, a big bet that is not likely to play out. He focusses on a single improbable and consequential event, falling for the narrative fallacy that makes us so blinded by one single outcome that we cannot imagine others. The fox, on the other hand, knows many things and is adaptable. Tetlock says it's better to be a fox with an open mind - I know history is going to be dominated by an improbable event, I just don't know what that event will be. In effect it's better to be more a generalist than a specialist, and how you think matters more than what you think.
Plans mostly seem to fail because of "tunnelling", i.e. the neglect of sources of uncertainty outside the plan itself. The unexpected generally has a one sided effect with projects. There is always something non certain in our modern environment.
More often than not we are too focussed on matters internal to our projects (viewing the world from within our models) to take external uncertainty (unknown unknowns) into account. Unlike biological variables (say life expectancy) human projects and ventures are often scalable (i.e. from "Extremistan") and the longer you wait the longer you are expected to wait. Corporate and
government projections would need to attach a possible error rate to their scenarios. Forecasting by bureaucrats tends to be used for anxiety relief rather than for adequate policy making. We tend to make few mistakes in "Mediocristan" but to make large ones in "Extremistan" as we do not realise the consequences of the rare event.
Louis Pasteur maintained that chance favours the prepared mind. The best way to get maximal exposure is to keep researching, to collect opportunities (serendipity at play?). The economist Friedrich Hayek too thought that a true forecast is done organically by a system, not by fiat. Society as a whole will be able to integrate into its functioning the multiple pieces of information. Society as a whole (rather than a single institution) thinks outside the box.
The quixotic overconfidence of business may be a "hidden benefit" of capitalism given that the operators in a free market can be as incompetent as they wish. However, for governments (or non scalable business) we need to try and ensure we don't all pay the price of others' folly. This could be one reason why placing business people with a high propensity for risk taking in executive positions of government or public administration may not be the brightest idea?
Generally we do not seem predisposed to respect humble people, those who try to suspend judgment. Once upon another time (the 1500s) a leading writer of the French Renaissance and "the father of modern scepticism", Michel Eyquem de Montaigne, penned a series of tentative, speculative, non definitive essays in which he was mainly interested in discovering things about himself which could be applied to all men - a true sceptic with charm and an awareness of the need to suspend judgment. He had this idea of Utopia as a society governed from the basis of an awareness of ignorance, not knowledge. Alas it seems one cannot easily assert authority by accepting one's own fallibility. People seem to need to be blinded by knowledge and to follow leaders who can gather people together, because the advantages of being in groups trump the disadvantages of being alone. Generally it has been more profitable for us to bind together in the wrong direction than to be alone in the right one. Those who have followed the assertive idiot rather than the introspective wise person have passed more of their genes on to us. But psychopaths rally followers, and this is something we desperately need to be constantly conscious of.
A characteristic of true autism (lacking any capacity for empathy) is an inability to "walk in the shoes" of others and view the world from their standpoint. Real social skills are impeded and others tend to be seen as inanimate objects (machines) moved by explicit rules. There's an inability to comprehend uncertainty. This is also something we need to be aware of.
It's true that in very special conditions a small input in a complex system can lead to non random large results e.g. a butterfly flapping its wings on one side of the world causing a hurricane on the other side. However, because there are billions of such small things (e.g. butterflies flapping wings) that could have caused the result it's very confusing (almost impossible) to say which one is the "culprit". In the end randomness is just unknowledge, the world is opaque and appearances fool us. As Yogi Berra said "you can observe a lot by just watching".
So what's the best we can do in practice?
- don't always withhold judgment?
- don't avoid predicting (just be a fool in the right places)?
- know how to rank beliefs, not according to their plausibility but by the harm they may cause?
- be prepared for all relevant eventualities (narrow minded prediction has an analgesic or therapeutic effect)?
- accept that a series of small failures are necessary in life (maximise the serendipity around you with lots of trial and error)?
- invest around 85% in safe spots and 15% in highly speculative spots (this is better than all in medium risk)?
- distinguish between positive and negative contingencies / "Black Swans" where uncertainty can occasionally pay off (lose small to make big) as well as where the unexpected can hit hard and hurt severely (Yogi Berra again - "you gotta be very careful if you don't know where you're going, because you might not get there")?
- invest in preparedness, not in prediction (don't look for the precise and the local)?
- maximise exposure to anything that looks like opportunity (increases the odds of serendipity)?
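The 85/15 "barbell" point in the list above can be sketched with a toy simulation. All the return figures below (the 3% safe rate, the 5% event odds, the 20x payoff, the loss sizes) are my own illustrative assumptions, not Taleb's numbers - the point is only the shape of the comparison:

```python
import random

random.seed(1)

def barbell_year(safe=0.85, speculative=0.15):
    """One year's outcome for an 85/15 barbell: the safe slice earns a
    small sure rate; the speculative slice usually loses but occasionally
    pays off hugely. Returns the end-of-year multiple of the stake."""
    safe_part = safe * 1.03                  # ~3% on the safe portion
    if random.random() < 0.05:               # rare positive Black Swan
        spec_part = speculative * 20         # 20x payoff
    else:
        spec_part = speculative * 0.5        # otherwise lose half of it
    return safe_part + spec_part

def medium_risk_year():
    """All-in medium risk: modest usual gain, but the whole stake is
    exposed to a rare large loss."""
    if random.random() < 0.05:               # rare negative Black Swan
        return 0.40                          # lose 60% of everything
    return 1.08                              # otherwise gain 8%

years = 10_000
barbell = [barbell_year() for _ in range(years)]
medium = [medium_risk_year() for _ in range(years)]

print("worst barbell year keeps:", round(min(barbell), 3), "of the stake")
print("worst medium-risk year keeps:", round(min(medium), 3), "of the stake")
```

The barbell's worst case is capped by construction (the safe 85% survives any year), while the all-in-medium position has no such floor - which is the whole argument for it.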
It might also be wise to beware of precise plans/predictions by governments and to keep an eye on their side effects e.g. regulators in the banking sector are prone to a severe expert problem and tend to condone reckless but hidden risk taking (something we're now well conscious of considering recent Irish history). The Achilles heel of capitalism = if you make corporations compete it is sometimes the one that is most exposed to the negative "Black Swan" that will appear to be the most fit for survival. Like the "wise" Yogi Berra also pointed out - "there are some people who if they don't already know, you can't tell 'em".
In the final analysis the "sensible" approach may be to take on board the idea that I will never get to know the unknown but I can always guess how it might affect me, and I should base my decisions around that, i.e. the central idea of uncertainty whereby in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you cannot know). If only Bush and Blair had done so!
If free markets have been successful it's probably because they allow for the trial and error process on the part of competing individual operators who fall for the narrative fallacy but are effectively partaking of a "good" project. Usually we can't figure out what's going on because of (a) epistemic arrogance and our corresponding future blindness, (b) the Platonic notion of categories or how people are fooled by reductions, particularly if they have an academic degree in an expert free discipline, and (c) flawed tools of inference, particularly the "Black Swan free" tools of "Mediocristan".
Any real "Black Swan" event has to be not just rare but also unexpected - it has to be outside our tunnel of possibilities. Random outcomes, or an arbitrary situation, can provide the initial push that leads to a "winner takes all" result. A person can get slightly ahead for entirely random reasons, and because we like to imitate one another we will flock to him. The world of contagion is underestimated. Consider the so-called "Matthew effect" in many of our societies, by which people take from the poor to give to the rich, reflecting the Biblical "to everyone that hath shall be given and he shall have abundance, but from him that hath not shall be taken away even that which he hath" - the rich get richer while the poor get poorer, the big get bigger and the small stay small or get relatively smaller.
To be contagious a mental category must agree with an integral part of our nature. However, a winner could be unseated by a newcomer popping up out of nowhere (the randomness effect). In the circumstances it's arguable that "straight" capitalism facilitates revitalisation of the world thanks to the opportunity to be lucky. Luck is a great equaliser because almost everyone can benefit from it. History tells us that centralised socialist (so-called) governments tend to protect their monsters (firms) and by doing so kill many potential newcomers in the womb. In truth everything is transitory, being both made and unmade. And in "Extremistan" the little guy can maybe bide his time in the antechamber of success.
The World Wide Web produces acute concentration, i.e. a large number of users visit just a few sites (e.g. Google). However, in addition the Web enables the formation of a reservoir of proto Googles waiting in the background, as well as promoting an inverse Google which allows people with a technical speciality to find a small stable audience. It's a fertile environment for diversity given the virtual volume/variety it can accommodate. Small players collectively should control a large segment of culture and commerce, thanks to the niches and subspecialities that can now survive due to the Internet, but strangely this can also imply a large measure of inequality, with a large base of small groups and a very small number of supergiants (fluctuating in membership). Nevertheless the small guy here is very subversive and his impact could eventually free us from the dominant political parties, the academic system, the clusters of the press, i.e. anything that is currently in the hands of ossified, conceited and self serving authority.
Cognitive diversity (variability in views and methods) acts like an engine for tinkering and works like evolution. By subverting the big structures it gets rid of the Platonified one way of doing things (and the "one size fits all" disease). With luck we can all look forward to the "bottom up and theory free" empiricist prevailing in the end.
At this point in human history we may be gliding into a new type of order/disorder where we will see more periods of calm and stability with most problems concentrated into a small number of "Black Swans". Recent warfare has decreased in probability but with an increased probability of degenerating into the total decimation of humanity. Globalisation also, while possibly reducing
some volatility and giving the appearance of stability, creates interlocking fragility, i.e. the potential for devastating "Black Swans". Financial institutions have been merging into a small number of very large banks. So many banks are now interlocked, the financial ecology swelling into gigantic, incestuous, bureaucratic conglomerates, often Gaussian in their risk measurement, that when one falls they all fall. On the face of it increased concentration among banks would seem to have the effect of making financial crises less likely, but when they do happen they're more global in scale and hit hard. With the recent (in historical terms) dominance of the financial system, we have moved from a diversified ecology of small banks with varied lending policies to a more homogeneous framework of firms that all resemble each other.
The rarer an event the less we can know about its odds. However, we do have some idea of how such a crisis might happen. A network is an assemblage of elements called nodes that are somehow connected to one another by a link (e.g. airports) with a few (control) nodes extremely connected, others less so. This seems to make networks more robust i.e. random insults to most parts will not be consequential since they are likely to hit a poorly connected spot, but it also makes them more vulnerable to "Black Swans" if random hits caused a problem with a major node. Perhaps we would be far better off if there was a different ecology in which financial institutions went bust on occasion and were rapidly replaced with new ones, thus mirroring the diversity of Internet business and the resilience of the Internet economy?
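The node-and-link point can be sketched with a toy hub-and-spoke network (the topology and numbers here are my own illustrative assumptions, the crudest possible caricature of a real network): random failures barely dent it, but losing the one highly connected node destroys it.

```python
import random

random.seed(0)

# A toy hub-and-spoke network: node 0 is the single highly connected hub;
# nodes 1..99 each link only to the hub.
edges = {(0, i) for i in range(1, 100)}

def surviving_links(failed):
    """Count edges whose two endpoints are both still alive."""
    return sum(1 for (a, b) in edges if a not in failed and b not in failed)

# Random insult: knock out 10 random non-hub nodes -- mild damage.
random_hit = set(random.sample(range(1, 100), 10))
print("links left after 10 random failures:", surviving_links(random_hit))  # 89

# Targeted insult: the hub alone goes down -- every link dies with it.
print("links left after losing the hub:", surviving_links({0}))  # 0
```

The same asymmetry is why a hub-dominated financial network can shrug off many small shocks and then fail totally when a major node goes.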
It has to be said that fairness is not exclusively an economic matter, and it becomes less and less so when we are satisfying our basic material needs. It appears it is pecking order that matters. The disproportionate share of the very few in intellectual influence is perhaps more unsettling than the uneven distribution of wealth, because no social policy can eliminate it. We know that people live longer in societies that have flatter social gradients. And it seems also that winners kill (indirectly) their peers, given that those in a steep social gradient live shorter lives regardless of their economic condition. "Extremistan" looks like it's here to stay, so we have to live with it and to try and find the tricks that make it more palatable.
The main point of the Gaussian is that most observations hover around the mediocre, the average, and the odds of a deviation decline faster and faster (exponentially) as you move away from the average. The Gaussian Bell Curve is fragile and vulnerable in the estimation of "tail" events - a small measurement error of the sigma (standard deviation) will lead to a massive underestimation of the probability.
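That fragility is easy to check numerically. The 10-sigma threshold and the 20% measurement error below are arbitrary illustrative choices:

```python
import math

def tail_prob(x, sigma):
    """P(X > x) for a zero-mean Gaussian with standard deviation sigma."""
    return 0.5 * math.erfc(x / (sigma * math.sqrt(2)))

# Odds of exceeding 10 units when the true sigma is 1.0 ...
p_true = tail_prob(10, 1.0)
# ... versus the odds we would assign after under-measuring sigma by 20%.
p_assigned = tail_prob(10, 0.8)

print(f"with true sigma 1.0 : {p_true:.3e}")
print(f"with sigma read 0.8 : {p_assigned:.3e}")
print(f"tail odds understated by a factor of roughly {p_true / p_assigned:.1e}")
```

A mere 20% error in sigma shifts the assigned odds of this one tail event by around twelve orders of magnitude, and the distortion grows the further into the tail you look.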
Perhaps there are basically only two possible paradigms - the non scalable (e.g. Gaussian) and the others (e.g. Mandelbrotian randomness) - and rejecting the former vision of the world is a kind of negative empiricism (I know a lot by determining what is wrong). In the Gaussian framework inequality (say of wealth) decreases as the deviation gets larger (moving away from the mean), but with the scalable (e.g. in "Extremistan") inequality stays the same throughout - it does not slow down - and for any large total the breakdown between components will be more and more asymmetric.
The Pareto Principle theorises that roughly 80% of results flow from just 20% of the effort (and, conversely, the remaining 80% of effort yields only 20% of the results). However, if there is inequality then those who constitute the 20% also contribute unequally, i.e. only a few deliver the lion's share. Measures of uncertainty based on the Bell Curve simply disregard the possibility of sharp jumps or discontinuities and are inapplicable in "Extremistan". The Gaussian is useful where the samples are not very large and there is a rational reason for the largest variable to be not too far away from the average. If there are physical limitations preventing very large observations, or there are strong forces of equilibrium bringing things back rather rapidly after conditions diverge, we end up in "Mediocristan". The rarer the event, the higher the error in our estimation of its probability - but the Gaussian Bell Curve sucks randomness out of life. The average man cannot be average in everything.
While every humanist wants to minimise the discrepancy between humans, it looks like reality is not "Mediocristan" and we have to learn to live with this. When dealing with qualitative inference - looking for yes/no answers to which magnitudes don't apply - you can assume you're in "Mediocristan" without serious problems, and the impact of the improbable won't be too large. But if dealing with aggregates where magnitudes do matter (e.g. income, wealth, investment returns) you will have a problem and get the wrong distribution if you use the Gaussian - one single number can disrupt all your averages. Standard deviation/variance would be just a number that you scale things to, a matter of mere correspondence, if phenomena were Gaussian.
To restate the main point: in the Gaussian Bell Curve most observations hover around the mediocre mean, while the odds of a deviation decline exponentially (faster and faster) as we move away from it, i.e. outliers become increasingly unlikely. This property generates the supreme law of "Mediocristan" - given the paucity of large deviations, their contribution to the total will be vanishingly small. However, if we introduce memory or skills the Gaussian becomes shaky. Probabilities do depend on history. We can't expect a simple answer to characterise uncertainty.
You stand above the rat race and the pecking order not outside of it, if you do so by choice. You have more control over your life if you decide on your actions by yourself. It's more difficult to be a loser in a game you set up yourself where you always control what you do. We tend to forget that just being alive is an extraordinary piece of good luck, a remote event, a hugely chance occurrence. You are a "Black Swan".
Benoit Mandelbrot's fractal shows a repetition of geometric patterns at different scales, revealing smaller and smaller versions of themselves. The small parts resemble the whole to some degree, with no qualitative change when an object changes size, e.g. veins in leaves look like branches and the branches look like trees. This character of self affinity implies that one deceptively short and simple rule of iteration can be used to build shapes of seemingly great complexity, e.g. in computer graphics or in nature. With the Mandelbrot set (the famous mathematical object) you can look at a set of pictures of ever increasing complexity at smaller and smaller resolutions without ever reaching the limit. You will continue to see recognisable shapes which, though never the same, bear an affinity to one another, a strong family resemblance. Examples abound in the visual arts, music and poetry (e.g. with Emily Dickinson there's a consciously made assemblage of diction, metres, rhetoric, gestures and tones).
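The "short rule, complex shape" idea is concrete enough to sketch. The entire Mandelbrot set is generated by one iterated line, z -> z² + c; the grid bounds, resolution and iteration cap below are arbitrary rendering choices:

```python
def in_mandelbrot(c, max_iter=50):
    """Iterate z -> z*z + c from zero; c belongs to the Mandelbrot set
    if z stays bounded (here: survives max_iter steps without escaping)."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| exceeds 2 it escapes to infinity
            return False
    return True

# A coarse ASCII view of the set: one short rule of iteration, yet the
# boundary stays intricate at every zoom level you care to render.
for row in range(11):
    line = ""
    for col in range(40):
        c = complex(-2.0 + col * 0.075, -1.1 + row * 0.22)
        line += "#" if in_mandelbrot(c) else "."
    print(line)
```

Zooming is just shrinking the grid spacing around a boundary point and re-running the same two-line rule - no new information is added, yet new detail keeps appearing.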
If I look at a rug on the floor from a standing position it appears uniform, and this corresponds to "Mediocristan" and the law of large numbers. I am seeing the sum of undulations (observable closer in) and these even out, like Gaussian randomness - you reach certainties by adding up small Gaussian uncertainties. On the other hand, no matter how high you climb, a mountain remains jagged - some surfaces are not from "Mediocristan", and changing the resolution does not make them much smoother.
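The rug side of the comparison can be simulated directly. A hedged Python sketch (my illustration, not the book's) shows the spread of an average of n Gaussian draws shrinking roughly as 1/√n - the undulations even out from a distance:

```python
import random
import statistics

random.seed(0)

def spread_of_mean(n: int, trials: int = 500) -> float:
    """Standard deviation of the mean of n Gaussian draws, across trials."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

# Summing small Gaussian uncertainties evens them out: the spread of
# the average shrinks roughly as 1/sqrt(n) - the rug looks uniform.
for n in (1, 25, 625):
    print(n, round(spread_of_mean(n), 3))
```

A fractal (mountain-like) surface would show no such smoothing: its jaggedness persists at every resolution.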
Unlike the Gaussian, the fractal has numerical or statistical properties that are somewhat preserved across scales. It is easier to reject the Bell Curve than to accept it - a single large event can destroy the argument that we face Gaussian randomness - just as it is easier to reject innocence than to accept it. Conversely, it is more difficult to reject a fractal than to accept it. Realistically, fractals should perhaps be our default approximation. They do not solve the "Black Swan" problem (turning such events into predictable ones - only the Gaussian offers certainties) but they significantly mitigate it by making large events conceivable. There is of course a problem of induction in "Extremistan": if a mechanism is fractal it can deliver large deviations, so such deviations are possible, but it is hard to know how possible with any degree of precision.
Social science is full of power laws (e.g. the ratio exponent of "half the number, double the pot": double the size of the pot and the number of people holding it roughly halves), and there is universality in many of these phenomena, with a wonderful similarity between various processes in nature and the behaviour of social groups. Power laws are associated with critical points, and many of the properties around such points are independent of the details of the underlying dynamical system. Accordingly we might be better off applying techniques from statistical physics rather than econometric or Gaussian-style non-scalable distributions. However, we need also to differentiate between the forward and the backward process (between the problem and the inverse problem). Not to do so would amount to a variation of the narrative fallacy - in the absence of a feedback process you would look at models and think that they confirm reality.
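"Half the number, double the pot" can be checked with a quick simulation. A Python sketch (my own, not from the book) draws from a Pareto distribution with tail exponent 1, where doubling the threshold roughly halves the count above it:

```python
import random

random.seed(1)

# Pareto distribution with tail exponent alpha = 1: P(X > x) ~ 1/x,
# so doubling the threshold roughly halves the count above it.
alpha = 1.0
n = 200_000
samples = [random.paretovariate(alpha) for _ in range(n)]

for threshold in (10, 20, 40, 80):
    count = sum(1 for x in samples if x > threshold)
    print(f"above {threshold:>2}: {count}")
```

Note the inverse problem the text warns about: the simulation runs forward from a known exponent, but estimating that exponent from observed data, especially in the sparse tail, is far harder.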
It seems we simply cannot read the equation that governs the world. We just observe data and make an assumption about what the real process might be, "calibrating" by adjusting our equation in accordance with additional information. As events present themselves to us we can compare what we see to what we expected to see, and discover that history really runs forward and not backward. In fact what we have is opacity, incompleteness of information and the invisibility of the generator of the world. Rather than history revealing its mind to us, we have to guess what is inside it.
While many study psychology, mathematics or evolutionary theory and look for ways to profit therefrom by applying their ideas to business, it may be far better to do the exact opposite i.e. to study the intense, unilateral, humbling uncertainty in the markets so as to get insights about the nature of randomness that is applicable to psychology, probability, mathematics, decision theory and even statistical physics. You will then see the "sneaky" manifestations of the narrative fallacy, the ludic fallacy and Platonic errors (going from representation to reality).
Models need to be viewed as descriptive, not precisely predictive. We need to avoid the common mistakes made when calibrating non-linear processes, which have greater freedom than linear ones, with the implication that you are at great risk of using the wrong model. I can make inferences about things that I do not see in my data, but those things should still belong in the realm of possibilities. Fractal randomness is a way to make some of the "Black Swans" appear possible, to make us aware of their consequences. "You are much safer if you know where the wild animals are." But fractal randomness does not yield precise answers, it just reduces the surprises.
It is contagion that determines the fate of a theory in social science, not its validity, e.g. the influential Milton Friedman argued that models do not need realistic assumptions to be acceptable, but this simply gave (and still gives) economists licence to produce severely defective mathematical representations of reality.
You cannot accept the Gaussian framework and large deviations at the same time. The entire statistical business appears to confuse absence of proof with proof of absence, misunderstanding the elementary asymmetry involved: you need only one observation to reject the Gaussian, but millions of observations will not fully confirm the validity of its application. This is because the Gaussian Bell Curve disallows large deviations, whereas the tools of "Extremistan" (the alternative) do not disallow long quiet stretches.
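That asymmetry is easy to make concrete. A small Python sketch (mine, not the book's) computes the Gaussian upper-tail probability via the complementary error function: under the Bell Curve a 10-sigma event is effectively impossible, so a single such observation rejects the model, while no run of quiet days can confirm it.

```python
import math

def gaussian_tail(z: float) -> float:
    """Upper-tail probability of a standard normal, via erfc."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# The Gaussian assigns absurdly tiny odds to large deviations, so one
# observed 10-sigma event is enough to reject it outright.
for z in (3, 5, 10):
    print(f"{z}-sigma tail probability: {gaussian_tail(z):.3e}")
```

A fat-tailed alternative, by contrast, happily tolerates long quiet stretches, which is why quiet data can never rule it out.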
This all brings to mind Locke's definition of a madman as someone reasoning correctly from erroneous premises. Elegant mathematics is perfectly (100%) right. This property appeals to mechanistic minds who do not want to deal with ambiguities. Unfortunately you have to cheat to make the world fit perfect mathematics and you have to fudge your assumptions somewhere.
In summary then, Mr. Taleb invites us to accept that a sophisticated craft focussed on tricks is preferable to a failed science looking for certainties. With sceptical empiricism you care about the premises more than the theories. You want to minimise reliance on theories, stay light on your feet and reduce your surprises. You want to be broadly right rather than precisely wrong.
Elegance in theories is often indicative of Platonicity and weakness - elegance for elegance's sake. A theory is like medicine or government: often useless, sometimes necessary, always self-serving and on occasion lethal. It needs to be used with care, moderation and close adult supervision.
Sean.
Dean of Quareness.
January, 2013.
PS: It seems his use of the term "Black Swan" derives from the unexpected discovery of such a bird in Western Australia, at a time when all swans were believed to be white - for a while people couldn't accept it was actually a swan.