The Mirror Illusion

The brain is a wonderful organ. It provides us with a mostly accurate representation of the world around us, by making approximations based on sensory input and past experience. These approximations were mostly sufficient for our ancestors, in the environment in which our brains evolved. There are many examples of how the brain can be fooled, ranging from optical illusions and stereoscopy to multi-sensory illusions such as virtual hands, phantom acupuncture, and the famous McGurk ba-ga experiment. What I will discuss here is something fundamentally different: how your brain can fool you through how it constructs the reality you perceive; and in particular, how difficult it can be to perceive it differently, even if you know the bias.

Look into a mirror. You will see an image of yourself with a lateral but not a vertical inversion, i.e., left/right seems swapped but not up/down. A watch on your left hand shows up on the image’s right hand, but a shoe does not show up on the image’s head.

Think about it for a minute. That makes absolutely no sense, as the mirror is a piece of reflective glass and should not discriminate between left/right and up/down. Most people have not thought through this apparent paradox. If you have not, I encourage you to take some time to think about it, and you will find that all of the obvious explanations that come to mind are, in fact, incorrect. When first presented with the problem, I found myself experimenting with a mirror, closing one eye, in various orientations, and imagining different scenarios without gravity, to no avail.

What makes this problem so difficult? Simply put, we are looking in the wrong places. We look in the wrong places because what the brain constructs (and what we perceive) feels so real, we take for granted that it is real, and automatically exclude it from closer examination.

The first, and rather difficult, step is to realize that the mirror doesn’t care about direction. You care about direction, and it is your brain that comes up with the representation, not the mirror. To illustrate this, point to the right, and the image will point in the same direction. Point up, same thing. However, point towards the mirror, and the image points back at you, in the opposite direction. The key observation is that the mirror inverts not the left/right or up/down direction, but the front/back direction.
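The front/back inversion can be written down directly. Here is a toy sketch (the coordinates and the reflection plane are my own choices for illustration, not from the post): a mirror lying in the z = 0 plane negates only the front/back coordinate, leaving left/right and up/down alone.

```python
# A mirror in the z = 0 plane reflects a point by negating only
# its front/back (z) coordinate; left/right (x) and up/down (y)
# are untouched.

def reflect(point):
    x, y, z = point
    return (x, y, -z)

# A watch on the left wrist: x = -1 (left side), z = 2 (in front of the mirror).
watch = (-1.0, 0.0, 2.0)
print(reflect(watch))  # (-1.0, 0.0, -2.0): the image's watch stays on the same side
```

Note that x never changes sign: the "left/right swap" we perceive is not in the geometry at all.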

The second step is to realize what you are actually seeing in the mirror. Imagine a cone pointed towards a wall. As you push the cone into the wall, imagine a cone growing on the other side of the wall, growing as you push, in the opposite direction. You end up with a cone pointing towards you on the other side of the wall. Put a red dot on the left side of the cone and a blue dot on the right side, and do the same thing. An inverted cone emerges on the other side of the wall, with the red dot and blue dot on the same corresponding sides. Now, imagine a human face being pushed through the wall nose first, just like the cone, with the colored dots as eyes. You end up with an image of a face on the other side, inverted. The left eye is still the left eye, just flipped inside out. Although convoluted and highly counterintuitive, this is the correct interpretation of the image in the mirror.

Still having problems visualizing it? Take a latex glove and put it on your right hand. Now, take off the glove by inverting it, so it is inside out. The inverted right-hand glove is the analog of the image in the mirror, even though it looks like a left-hand glove.

The question now becomes: why do we so instinctively see a person swapped in the left/right direction, to the point where we cannot help but see it that way? The reason, simply put, is that it requires the least work from the brain. The correct interpretation (inverting) requires an incredible amount of work, as evidenced by the effort it takes simply to imagine it. There is no existing brain circuitry to do an inversion, because there was no need for one when the brain evolved. It is far easier for the brain to represent the image as “someone” facing you than as an inverted, meaningless image. The agent-detection circuitry in your brain is where the problem lies, not the mirror.

Once the brain treats it as a “person”, it needs to orient the “person” in space to make sense of it. There are two main ways to mentally turn objects around in space: around a vertical axis (turning around), or around a horizontal axis (think foosball). Technically speaking, both ways are equally valid (as are any diagonal axes). Our brain uses the existing evolved circuitry, which is to spin the “person” around the vertical axis to face you, for that is what it encounters day to day. Interestingly enough, if one were to mentally flip the image around the horizontal axis, foosball-style, one would see it as up/down inverted but not left/right inverted, further proving that the mirror does not discriminate, and that the problem arises from a hardwired preference in your brain.

This example shows that something seemingly so real and right is no more than an erroneous representation concocted by the brain. The explanation is counterintuitive, but readily verifiable, and probably enough to change your mind. Of course, this is a trivial question with a not-so-trivial answer, with no vested interest or emotional investment.

It makes me wonder though. I feel quite confident and passionate about many issues, far more complicated and nuanced than a piece of reflective glass. How many of those could I be completely wrong about, simply because it “feels” right? How many wrong trees could I be barking up, blissfully ignorant of the squirrel squarely perched on my back?

I wonder.

 

Yearning for a Hero

Judgments are clouded by emotion, and nothing brings out emotion like a highly charged event:

On Feb 4, 2015, TransAsia flight 235 ran into trouble shortly after takeoff and plunged into a river, clipping a taxi and bridge in the process. 43 people perished and 15 survived.

In the immediate aftermath of a catastrophe, when emotions are in overdrive, it is human nature to look for anything to lessen the pain, to divert attention, to ameliorate the situation. And what could be better than the ultimate silver lining… a hero. After all, it changes the narrative from an outright tragedy into a tale of a hero fighting to the end.

With reckless abandon, the notoriously salacious Taiwan media pushed the hero angle. In the days following the tragedy, the media and general public praised the pilot for flying along the river, avoiding populated areas, and making the crash less tragic than it could have been. In news reports and social media, he was elevated to a saint, a hero, a god. The general public, eager to shift its focus, swallowed the narrative hook, line, and sinker. Anyone even remotely questioning that status quickly experienced the viciousness that only online anonymity can engender.

Did the pilot really deserve to be called a hero? Let’s take a closer look.

The first point to examine is whether the pilot was at fault to any degree. If pilot error caused the flying tube to inadvertently engage a riverbed, it would take a Stockholm Syndrome-type of twisted logic to call the pilot a hero. It would be like calling a quarterback a hero because he recovered the ball that he fumbled, causing his team to lose by 10 points instead of 14. By declaring the pilot a hero, however, Taiwan’s media practically precluded the possibility that he was at fault. Evidently, their favorite sport is jumping to conclusions.

The final investigation report has not been published yet, but preliminary reports seem to indicate that the pilot might have shut down the wrong engine (apply palm to face). This two-engine plane can fly on one engine, but definitely not on none. It is eerily reminiscent of the surgeon who amputated the wrong leg.

For argument’s sake, let’s assume the pilot was not at fault whatsoever. The question is, were his actions so extraordinarily courageous and altruistic as to qualify him as a hero? Did he go above and beyond what is normally expected or required, significantly risking his personal welfare for the benefit of others?

The pilot seems to have maintained composure in the face of mechanical problems, and flew the plane along a river to minimize collateral damage. Some argue that alone qualifies him as a hero, since a normal person would not be able to do the same. This is fallacious, as a pilot should not be compared to a layperson. By that logic, a doctor would be a hero for not fainting when elbow deep in a patient. A similar logic applies to prostitutes.

Pilots are trained specifically to handle emergencies: to stay calm first, then aviate, navigate, and communicate. The bar is set high, and we rightfully expect our pilots to meet that standard. Choosing to fly along an open, flat area with emergency-landing potential, such as a river, is also a basic part of a pilot’s emergency training (see page 2 of this FAA emergency procedure manual). This is a no-brainer; any sane person would choose an open river over a densely populated concrete jungle, with or without pilot training. Calling the pilot a hero simply for not committing mass murder by purposely plunging into buildings is like calling a bus driver a hero for pulling over rather than intentionally plowing into the opposing lane. It cheapens the word and renders it meaningless.

The last reason is a bit more subtle. In the “trolley problem” moral thought experiment, you see a runaway trolley barreling down the tracks. Tied to the track ahead are five people, certain to die if no action is taken. There is a lever that diverts the trolley onto an alternate track, at the end of which lies one person. The moral dilemma is a difficult choice between causing one to die through action or five to die through inaction.

In this case, the pilot is the decision maker but, in a sad twist, is also riding in the metaphorical trolley. His own life is on the line, which blurs the distinction between self-preservation and altruism; he is not risking anything more than he already has. In poker, it would be called a freeroll: everything to gain and nothing additional to lose. I suspect that nearly every reasonable person would do the same out of altruism, if not self-preservation, with few notable exceptions.

To summarize, the pilot cannot be called a hero because:

  1. He was doing his job (nothing extraordinary)
  2. His life was on the line (self-preservation)
  3. He very possibly was significantly at fault (shut off wrong engine)

The love for heroes seems to be universal, as it appears in every culture, and can be explained by evolutionary psychology. Naturally, we yearn for heroes, often with grossly misguided approaches. Genuine altruism touches upon our innermost sense of morality, telling a narrative of fellow beings who voluntarily risk life and limb for others; that perhaps one day, under the right circumstances, we ourselves might be inspired enough to be someone’s hero. It is a compelling narrative indeed.

We would not be human beings if we did not have emotions, and those emotions can be manipulated to affect judgment, sometimes by other parties, sometimes even by ourselves. False narratives provide that very emotional comfort zone, somewhere we can retreat and feel good. Everybody needs a bit of mental masturbation now and then. Just like the real thing, as long as it’s kept private, things won’t get awkward.

I am not trying to minimize the suffering of the victims in this tragedy, which is very real indeed. Looking for a silver lining that isn’t there is like looking for an excuse that doesn’t fit. Both are emotionally appealing but serve no real purpose, because in the end, a false hero provides no comfort, and explanation does not equal exculpation.

Appropriate Truths

Warning. This post is most assuredly not politically correct.

In “Origin of the Specious”, the late Irving Kristol quipped:

There are different kinds of truths for different kinds of people… there are truths appropriate for children; truths that are appropriate for students; truths that are appropriate for educated adults; and truths that are appropriate for highly educated adults, and the notion that there should be one set of truths available to everyone is a modern democratic fallacy. It doesn’t work.

Cynical, patronizing, and arguably elitist, but not necessarily unreasonable.

We encounter different truths growing up, ranging from the tooth fairy and Santa Claus, to anthropocentrism, mind-body dualism, an eternal soul, a personal god. As our understanding of the world increases, some truths are readily discarded, some are replaced by more sophisticated versions, and some persist (of course, the “truths” discussed here are not literal).

Some truths, such as Santa Claus, seem to be discarded as one gains a basic understanding of the world. Or is it so? I think most trivial ideas are rejected not through critical thought, but because one’s peers reject them. For children, the peer group is typically classmates or friends, a group within which ideas spread and proliferate, and which exerts immense influence. After all, migrant children tend to speak the language and identify with the culture of their peers, not their parents; and kids attending international schools tend to come out not with their parents’ accent but with a mixed accent of their peers. I posit that most children reject ideas like Santa Claus primarily because their peers reject them, with the rationalization coming later and secondary. For adults, although the peer groups may differ by subject, most people still follow the prevailing position of each in-group, treat the opinions and arguments of the in-group as more meritorious than they deserve, and ignore or discount disconfirming evidence.

Independent thinkers (relatively speaking, of course), far fewer in number, may possess the thinking tools, but lack the knowledge, capacity, or even willingness to examine an idea properly and critically.

For example, to debunk Santa Claus, one does not need to understand mammalian aerodynamics or solve the Traveling Salesman Problem; simple, intuitive (Bayesian) probability will do. Reindeer have not been known to fly, and elves have not been known to exist. The prior probability of either is negligible (let alone both), so the idea can be safely rejected. Unfortunately, many “truths”, especially those involving ideology and theology, are not so easily revised or dismissed; they usually require scientific literacy, advanced logic and thinking tools, philosophical constructs, an understanding of psychology and cognitive biases, intellectual capacity (so politically incorrect!), honesty, and curiosity. A few examples include intelligent design vs. evolution, global warming, alternative medicine, mind-body dualism, tabula rasa, and rational choice.
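The prior-probability argument can be sketched numerically. All the numbers below are invented for illustration; the point is only that even generous evidence cannot rescue a negligible prior.

```python
# Bayes' rule sketch: a negligible prior stays negligible even
# under favorable evidence. Every number here is made up.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' rule."""
    num = p_evidence_given_h * prior
    den = num + p_evidence_given_not_h * (1.0 - prior)
    return num / den

prior_flying_reindeer = 1e-12   # essentially never observed
p_stories_if_real = 0.99        # stories would surely exist if Santa were real
p_stories_if_not = 0.5          # stories exist anyway (folklore is cheap)

p = posterior(prior_flying_reindeer, p_stories_if_real, p_stories_if_not)
print(p)  # ~2e-12: still negligible
```

The existence of Santa stories is weak evidence (folklore arises either way), so the posterior barely budges from the prior.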

On important issues, people are often adamantly aligned with what they feel is right, as if that were in itself reason enough. It is often useful to see what a collectively disinterested group of experts thinks about a subject, without appealing to authority. For example, consider this poll conducted by Pew Research (full report). It is unreasonable to expect a layperson, such as myself, to be an expert on these important issues; however, I believe one should have the basic humility to at least seriously consider the views of expert scientists, who are collectively far more intelligent and understand the issues and nuances more thoroughly. Being politically incorrect, I believe that scientific (not philosophical) issues should not be a democracy; they should be guided by relevant experts rather than popular vote.

Could the experts be wrong? Could scientists fall victim to groupthink? Those are valid questions, as no one person (or group) is right all the time. The more relevant question is: who is more likely to be right? The gap in understanding between scientists and the general public is great. Case in point: when scientists were asked how much of a problem it is that the public does not know much about science, 98 out of 99 answered “a major or minor problem”; only 1 answered “not a problem”. As Adam Savage would say, “well, there’s your problem”. On the one hand, you have a consensus reached by the smartest people, who dedicate their lives to studying these questions; on the other, a gut feeling.  Assuming they differ, which one would you bet on?

Could the approach to a higher “truth” be like mountain climbing, requiring specialized tools and skills? Perhaps, like Andrew Wiles’ proof of Fermat’s Last Theorem, it requires a deep understanding of disparate fields of mathematics, with few having the wherewithal even to understand the proof. For a rigorous examination of certain issues this may be true (some philosophical problems, for example).  However, I suspect that even though many issues are complex and often intentionally confusing (example), given the right tools, most people can reasonably reach higher levels of “truth”.

One of those tools is a basic understanding of how the brain works (or fails to work). The ever-growing list of cognitive biases discovered by science does not paint a pretty picture. The brain is but a delusion generator, constructing a version of reality from various input signals, as eloquently explained in How the Mind Works. One need look no further than split-brain research to realize this. A recent salient example is Dressgate, which made people question the veridicality of what they see with their own eyes. Evolutionary psychology shows that this constructed version has little to do with reality, and more to do with what conferred a survival advantage, with heuristics and approximations often being good enough. Cutting through the brain’s deception requires thinking critically and scientifically, in itself an arduous and painful process, as sacred cows are slain and comforting beliefs crumble under closer examination.  As the saying goes, the will to doubt requires far more than the will to believe.

The cynic in me asserts that few would even bother with the process, much less endure the unending cognitive dissonance; the optimist in me asserts that, well, false hope is still hope.

I believe that most can attain a higher “truth”, subject to our practical limitations (bounded rationality). It is probably beneficial to most people, individually and collectively as a society; however, whether it is always beneficial to everyone is a question I cannot answer. After all, the curse of knowledge is a real phenomenon – look no further than my jargon-filled, needlessly abstracted, diabolical writing style, unintelligible to most, often including myself.  Deeper understanding does not necessarily result in greater happiness.

Bertrand Russell once said, “The fundamental cause of trouble in the world today is that the stupid are cocksure while the intelligent are full of doubt”.  The intelligent may have a better understanding, but the curse is that they lose the perspective of the less informed.  It is as easy to unthink a solution as it is to unsee a hidden message, or to unfind Waldo.  Sadly, in the current climate, it is politically correct to “give equal representation” to the fervent and passionate Waldo deniers, metaphorically speaking.  After all, the cocksure are loud, but more importantly, they can vote.

Doomsday Machine

My blog posts are mostly about rationality and careful thinking.

This is not one of them.

In a hypothetical world where everyone is rational, one would expect better outcomes with careful, calculated actions. In reality, we are far less rational than we would care to admit; and sometimes irrationality wins.

In the classic Kubrick film “Dr. Strangelove”, the Soviets create a Doomsday Machine, which will automatically and irrevocably set off nuclear bombs and destroy Earth if one of their key targets is hit. Obviously this Doomsday Machine provides immense deterrence value. Ironically, the Soviets kept it a secret, utterly defeating its purpose.

A Doomsday Machine is the ultimate manifestation of irrationality, a willingness and commitment to go all the way. It is greatly effective as a deterrent, as the outcome is certain, terrible, and irrevocable. The key, of course, is to make everyone aware of the consequences.

Another example: in a game of chicken, two cars speed towards each other and the one who swerves away first, loses. The best way to win is to break off your steering wheel and throw it out the window conspicuously, ensuring your opponent sees it. It is worth noting that, although imitation is a form of flattery, adopting the same strategy after you see your opponent do it, is suboptimal.

Curiously, by taking away one’s own freedom to choose, the opponent’s freedom to choose is taken away as well, assuming the opponent is rational. In this case, irrationality wins.

It comes in handy at the poker table. Going all-in takes away your opponent’s freedom to bluff. Similarly, by calling a large bet early in the game with a less-than-premium hand, the other players will hesitate to bluff you later, knowing you might call the bluff.

However, what is most interesting to me is not how irrationality applies to game theory, but to human emotions such as revenge (and by extension, patriotism), love, and grief. My previous post on revenge focused on the revenger’s state of mind; the omission of publicity is atypical and likely pathological, but more effective and nuanced.

To me, the most surprising of all is how it applies to grief. It seems like such a strange thing to require an explanation; after all, grief happens when you lose someone or something you love dearly. The more you love, the deeper the grief. Yet that does not explain why grief is so debilitating and intense, to the point where one cannot eat, think, or function properly. Evolutionarily this makes little sense, as it leaves one more vulnerable to predators. Some animals seem to grieve, but not to the extent of humans. Some propose that grief forces one to plan for life after the loss; this is unsatisfying, as grief is too intense and lasting to be useful, not to mention that it impairs one’s ability to plan.

What parent has not worried sick that something bad might happen to his/her child? That is the byproduct of love, a reminder to protect and cherish what we have. Perhaps that’s what grief is: a deterrent, an emotional Doomsday Machine. Pointless once it goes off – certain, terrible, and irrevocable. An unusual explanation, but so far the best I’ve seen.

Credit: most of the observations are from How the Mind Works by Dr. Steven Pinker, one of my favorite authors.

Deception

This is a loose English version of my Facebook post.

This thought experiment is based on Daniel Dennett’s Library of Mendel (originally from Fechner), although he used it to illustrate something completely different.

Imagine a library that contains every possible book. Suppose each book is 500 pages, with 40 lines per page and 50 character slots per line. Each page then holds 2,000 characters (including spaces). Say there are 100 possible characters (including the space and punctuation marks), which should cover the upper- and lowercase letters of English and the European variations of the alphabet.

Somewhere in the library, there is a book consisting of nothing but blank pages, and another book consisting of nothing but obscenities. It is a large, but finite, library.
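How large? The count follows directly from the numbers above; a quick back-of-the-envelope sketch:

```python
# Size of the library: each book is a fixed-length string of
# 1,000,000 characters (500 pages x 40 lines x 50 slots per line),
# each slot drawn from a 100-character alphabet.

chars_per_book = 500 * 40 * 50          # 1,000,000 character slots
alphabet = 100                          # choices per slot

num_books = alphabet ** chars_per_book  # 100^1,000,000 possible books

# 100^n == 10^(2n), so the library holds exactly 10^2,000,000 books.
assert num_books == 10 ** (2 * chars_per_book)
print(2 * chars_per_book)  # 2000000: the exponent, a 1 followed by 2 million zeros
```

For comparison, the observable universe holds roughly 10^80 atoms; the library is finite, but "large" is an understatement.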

Within this library you can find every book ever published, and their translations in all languages, including long-lost ancient ones. If the book you are looking for is longer than 500 pages, it can be found in the library, properly split and numbered into different volumes.

Fascinatingly enough, here you can find your biography, 100% accurate, not only for your past but also perfectly predicting everything in your future, down to the day you die. In fact, you can find it written in regular English, ebonics, limericks, or with obscenities scattered throughout.

You can also find the correct value of pi (3.14159265358979…), to infinite precision, volume after volume. You can find it spelled out as well: three point one four one five nine two six five, and so on. Paradoxically, pi’s digits are infinite, yet you can find them all in this large but finite library.

In this library, you can find anything you want to know about the universe, from Mozart to your innermost thoughts.

Everything I have written so far is technically true. It is also completely misleading and deceptive.

  1. Choice of words. The use of “library” and “books” primes you to think of them as what you commonly encounter. In fact, the vast majority of “books” contain nothing but gibberish. The chance of finding a volume that contains English words is astronomically small. Among those volumes, the chance of finding one with grammatically correct sentences is astronomically small. Among those, the chance of finding one that makes sense is, again, astronomically small. And among those, the chance of finding one that is correct is, yet again, astronomically small. This is very different from the “books” and “libraries” you are used to, where every volume is meaningful and deliberately written to convey a thought. An analogy would be me pointing to a bunch of numbers and proclaiming, “within these numbers you can find the winning combinations of the next 100 lottos”. The difference being that the odds of finding the lotto numbers are better.
  2. The example of pi is also completely misleading. You need to know pi to the precision you want in order to find the volumes, not the other way around. Yes, pi’s digits are infinite while the set of volumes is finite, so how does that work? It works because sooner or later, volumes must be reused: there are only finitely many possible 1,000,000-character volumes, so among the infinitely many volumes of pi’s digits, some volume must eventually appear again. Sounds crazy, but it is a mathematical certainty (the pigeonhole principle).
  3. Using “your biography” induces you to become emotionally invested. It uses your narcissism against you. After all, who doesn’t want to know their own future? The problem is, even though such a biography exists, you would not know which one is correct, even if you could find it.
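
The “mathematical certainty” in point 2 is just the pigeonhole principle, which a scaled-down sketch makes concrete. Here I shrink a “volume” to 3 digits instead of 1,000,000, and the digit stream is arbitrary rather than pi; the argument is identical.

```python
# Toy pigeonhole check: with 3-digit "volumes" there are only
# 10**3 = 1000 possibilities, so any stream of 1001 volumes
# (3003 digits) must reuse at least one volume.

import random

random.seed(0)  # arbitrary digit stream; any stream would do
stream = "".join(random.choice("0123456789") for _ in range(3 * 1001))

volumes = [stream[i:i + 3] for i in range(0, len(stream), 3)]
assert len(volumes) == 1001              # one more volume than possibilities
assert len(set(volumes)) < len(volumes)  # pigeonhole: some volume repeats
```

The asserts hold for any digit stream whatsoever, which is exactly why the claim about pi is a certainty rather than a coincidence.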

To break away from this nonsense, we need to adjust the parameters and see what happens; in Dennett’s terms, “turn the knobs on the intuition pump”. What happens when we reduce the number of pages from 500 to just one? Well, the library becomes much smaller, and you are simply retrieving pages instead of volumes. What happens when we reduce it further, to just one line of 50 characters? What happens when we reduce it to just one character?

One character? That’s easy. It’s just the original 100 character set. Everything is simply built from this character set.

In fact, we can further reduce it to 0 and 1, if we encode into ASCII or Unicode.

This thought experiment shows how framing can mislead one into thinking a certain way, how cherry picking special cases can paint a rosy picture, how the brain is not equipped to deal with large numbers (scope insensitivity), how easy it is to see meaning in randomness, and how getting emotionally involved can cloud one’s judgment. Politicians use these dirty tricks, as do weight loss commercials.

Sharpening one’s thinking tools, along with some understanding of psychology, can come in super handy.  Especially when you need to deceive others effectively.

Thoughts on Revenge

Most moral philosophers regard revenge, unlike reparation, as morally unacceptable, in the sense that harm is inflicted but achieves little to nothing for the revenging party aside from satisfaction.  Revenge can be destabilizing because the harm inflicted is very often perceived by the receiving party as disproportionate to the original encroachment, and can easily descend into a vicious circle such as a blood feud.

Our evolved, primitive sense of justice is the main driver of revenge, as it seems to be consistent across cultures and is not limited to humans.  The primary motive for humans is to seek pleasure or satisfaction by inflicting harm on the perceived offending party, and perhaps, as a secondary motive, to deter future offenses.  Revenge is not necessarily justice, but that is out of scope here.

What the revenging party gains is mainly emotional: namely, pleasure or satisfaction from knowing that the offending party has suffered as a result of the revenger’s actions.

An operational definition of revenge according to Wikipedia is “a harmful action against a person or group in response to a real or perceived grievance”.  I think that this definition lacks some of the key psychological requirements central to revenge.

Let’s see what makes up revenge.  For simplicity, let’s call the party seeking revenge A, and the recipient party of the revenge B.

First requirement: a perceived grievance against A, with B being the perceived offender.  Or is it?  Say B tortures a puppy unaffiliated with A, and A decides to whack B with a sledgehammer on behalf of the aforementioned puppy.  Is that considered revenge?  I would argue that it is, since A takes pleasure in punishing B for actions that offended A, albeit indirectly.  It would be exacting revenge on a third party’s behalf.  Therefore, I would revise the requirement to: “a direct or indirect harm or injustice perceived by A, with B being the perceived offender”.

Second requirement: intention of harm by A toward B in direct response to said perceived offense.  If there is no intention, it cannot be considered revenge.  Say A unknowingly runs over B with a truck.  A can certainly take pleasure in this development, but it cannot be considered revenge, since there was no intention to do so FOR the original grievance.  At best it can be considered “karma”.  But not revenge.

Third requirement: formulation of a plan by A to inflict harm upon B.  Or is it?  Say B happens to walk under a piano, and A sees the opportunity and decides to cut the rope holding the piano, flattening B in the process.  That would certainly be considered revenge.  There is no advance planning, only a snap decision in the face of a fleeting opportunity.  So scratch that requirement.

New third requirement: actual infliction of harm upon B through conscious action or inaction by A.  Say B is drowning and A declines to act to save B.  That would certainly be considered revenge.  What about the conscious part?  Say B is drowning along with 9 other people.  If A consciously decides to save the others and not B, that is certainly revenge.  However, if A simply clams up and saves no one, it would be hard to call it revenge.

Fourth requirement: derivation of satisfaction or pleasure by A from the infliction of harm on B.  Or is it?  Say B drowned as a result of A declining to act, but afterwards A did not derive any pleasure, contrary to what he had expected.  Is that still considered revenge?  I believe most would say yes.  So what was wrong with the requirement?  I argue it is the anticipation of pleasure or satisfaction that is essential, not the actual outcome.  Therefore, I would revise the requirement to: “anticipation of satisfaction or pleasure by A from the infliction of harm on B”.  It is irrelevant whether said satisfaction is actually experienced.  So what, then, is an act of revenge without anticipation of pleasure?  I would call it a form of retribution.  It seems more like the governmental justice system: indifferent and detached.

Final requirement: B knowing, or guessing to a reasonable certainty, that said harm was inflicted by A in response to a previous perceived grievance.  Or is it?  Does B have to know that the harm was inflicted by A for the revenge to be valid?  It would certainly be more satisfying for A to know that not only had perceived justice been done, but that B was aware it had been doled out by the wronged party.  Still, most would agree that this proposed requirement is not essential to revenge.  It would require significant mental discipline of A to resist the natural urge to gloat, and to realize that there is no real upside to B knowing.  It is a comparatively rational form of revenge, and in my opinion the only type that ends the vicious circle.  So let’s scrap the final requirement.  Of course, revenge is not only between A and B; it also deters other parties from potential transgressions against A.  But I think that, whether publicly or implicitly known or not, A would still sport a Duchenne smile.

Revenge is therefore better defined as: “Inflicted harm through conscious action or inaction in direct response to perceived direct or indirect grievance, with anticipation of satisfaction or pleasure from the infliction of said harm”.  I am probably committing great offenses against the English language here, but hey, this is my blog.

Take the recent bombing at the Boston Marathon as an example.  Currently the perpetrator is unknown, and no party has claimed responsibility yet.  It is reasonable to speculate that it is likely an act of revenge, rather than an ill-conceived test or stupidity gone awry.  Should the responsible party never come forward at all, I would argue that is a more effective modality of revenge.  Not that I condone the bombing in any way; this is simply my opinion on the nuances of revenge execution.

Afterword:

1. The bombing suspects are now known, but my point still stands.

2. I am ignoring a very important function of revenge, which is deterrence.  Deterrence is most effective when the revenge is done in a public (or at least implicitly public) manner.  The lust for revenge likely evolved psychologically from deterrence.

Psychology

Although I think the “Skeptical 12 Step Program” is really a 3 step program, its first step is the most profound: “we admit that our cognition, perception, and memory are flawed, and pseudoscience and gullibility are rampant”.

Everyone knows to a certain extent that they can be fooled by others: by magicians, politicians, used car salesmen, parents, spouses, etc.  In my opinion, the REAL first step is realizing how often, how unconsciously, and how profoundly one can be fooled by oneself.  Realizing that one has been fooled by others is easy.  How does one break out of self-delusion without even knowing self-delusion is possible?  As Richard Feynman said, “The first principle is that you must not fool yourself, and you are the easiest person to fool”.

There are plenty of books such as “Sleights of Mind” that are not only insightful but also entertaining, and those can be great as an introduction.  Others, like “Mistakes Were Made”, are interesting but belabor the point, IMO.  I personally benefited greatly from taking some excellent free psychology courses on iTunes U, and I highly recommend the courses from Dr. Jeremy Wolfe and Dr. Paul Bloom.

A long time ago, when I was studying engineering, I considered psychology scientifically imprecise and, by mistaken inference, a less legitimate field.  It was not until much later in life, after I became interested in behavior, psychology, and neuroscience, that I looked into the field and corrected my misconception.  Engineers are used to dealing with hard numbers, and although it is true that you cannot quantify thoughts, that does not diminish the insight the field provides into the human mind, nor does it imply that its theories make less useful or testable predictions.

Compared to the other bodily organs, the chunk of thinking meat between your ears is by far the most difficult to understand, and recent scientific advancements have made it possible to peer into some of its inner workings.

I’ve always wondered at what point the meat turns bad and the ability to think deteriorates.  My guess is that the CPU slows down, RAM is lost, some ROM is corrupted, and the keyboard gets stuck.  Eventually, “Inception” style, entire levels of consciousness (e.g., this is sweet – this is sweet but is artificial and has no calories – I am consciously thinking about the fact that this sweetness is from an artificial sweetener – I am blogging about the fact that I might lose the ability to consciously think about the useless assessment of that stupid sweetener) will be lost.  Maybe it’ll be like Charlie in Flowers for Algernon.

Teleological Thinking

What differentiates humans from animals?  Apparently not just the ability to think.  I see the main difference as the ability to think about thinking, which is metacognition, or a very high level of consciousness.  Although some species have shown some rudimentary form of metacognition, humans are by far better at it.  Without being taught explicitly, children learn to take on the perspective of others by 5 or 6.  They are intellectually curious, and learn to ask the who, what, when, how, and why questions, sometimes making sense, sometimes not.

In my opinion, the “why” questions have led not only to the most knowledge, but also to the most nonsense.  Consider this example:

*splat* (flattened cockroach)
“Daddy, why did you kill the cockroach?”
“Well the cockroach is dirty and will make you sick, so daddy got rid of it.”
“Why do cockroaches make you sick?”
“Well the cockroaches carry germs that can make you sick.”
“Why do cockroaches carry germs?”
“Well, they don’t want to, but they pick up the germs by running around.  And those germs can make you very sick and die.”
“Why do people die?”
“Uh, well Daddy doesn’t want to die and certainly doesn’t want you to die.  Daddy is here to protect you.”
“Why are there germs? Why are there cockroaches? Why do people die?”

At some point, the conversation reaches a limit of understanding for both the kid and the dad.  The dad is faced with two choices: admitting he doesn’t know the answer, which is not overly appealing, or invoking a stereotypical “God”, which, when followed by more questions, leads down a rabbit hole of nonsense.

In the kid’s simple reasoning, the dad exists FOR the protection of the son.  The cockroach exists FOR disease spreading.  The sun exists FOR plants to grow.  The kid, dad, cockroach, and sun all exist FOR the glory of God, praising and presumably boosting His ego for lack of better explanation.  By extrapolation, there is an ultimate end, a goal, a “telos” for everything.  And sadly, many never grow out of this simple reasoning.  Confusing cause and effect, or attributing causation where none exists, simply does not make sense.

Knowledge is advanced through the asking and answering of successively better questions, and good questions are those that can be tested.  When faced with unprovable “why” questions such as “Why is the rock there?”, if people just answered intellectually honestly with “I don’t know” instead of “well, God put it there for your sitting pleasure”, the question would likely remain just that – a bad question.  An unwillingness to admit ignorance, combined with teleological reasoning, leads to the propagation of belief in an increasingly detailed mythical bearded superbeing who gets mad at what you do in your own privacy.

Maybe one day we will be able to get to the point where we can answer more “why” questions.  By then we probably will have evolved brains so big we need an extra neck to support them.  But for now, such answers lie beyond the realistic limits of our cognitive capacity.

Intuition

Intuition, simply put, is a gut feeling.  It could be based on prior knowledge, pattern recognition, an unconscious reaction, even superstition.  It is useful in making quick decisions on the spot, say, when you are alone in the jungle and hear rustling in the bushes.  But in reality, it is a lousy basis for important decisions.

Let’s look at this example.

Imagine a fictional Foobar disease, which is always fatal, not common but not overly rare either, with an overall occurrence of 0.1%.  There is a test that is exceptionally sensitive (100%), which means that if you have the disease, this test will definitely identify it.  The test also has a very low false positive rate of 1% (99% specificity).

Out of curiosity, you take the test.  It turns out positive.  Ouch.

Quick!  Based on your gut feeling, what are the chances that you have this fatal Foobar disease?

95%? 90%?

No.

The correct answer is around 9%.  The approximate calculation is as follows (for exact calculations use Bayes’ theorem):

Out of 1000 people, only 1 will actually have the disease (0.1%).  The test, with a false positive rate of 1%, is expected to incorrectly identify 10 people as having the disease, along with the 1 person that actually has the disease.  Out of the 11 people identified as positive, only 1 will actually have it.
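For those who prefer to see the arithmetic laid out, here is a minimal Python sketch of the same Bayes calculation, using the made-up Foobar numbers from the example:

```python
# Hypothetical Foobar disease numbers from the example above
prevalence = 0.001          # 0.1% of the population has the disease
sensitivity = 1.0           # 100%: every true case tests positive
false_positive_rate = 0.01  # 1% of healthy people still test positive

# P(test positive) = true positives + false positives
p_positive = (prevalence * sensitivity
              + (1 - prevalence) * false_positive_rate)

# Bayes' theorem: P(disease | positive test)
p_disease_given_positive = prevalence * sensitivity / p_positive

print(f"{p_disease_given_positive:.1%}")  # prints "9.1%"
```

The exact posterior is about 9.1%, which matches the back-of-the-envelope 1-in-11 estimate above.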

Counterintuitive, but true.

Now try telling that to the people that just tested positive for Foobar and blew their entire life savings at the casino.

When the US Preventive Services Task Force changed the guidelines for mammogram screenings, it was based on scientific evidence.  Same with prostate cancer screenings (the PSA test).  The test intervals were lengthened (or the tests eliminated) because there was no evidence of actual benefit in the general (not high-risk) population.  The public immediately fired back, simply because the change is highly counterintuitive: how on earth could someone oppose extra testing?  Conspiracy theories immediately surfaced, and the issue soon became political instead of a fact-based discussion.

It is unrealistic to expect everyone to look into and fully understand the underlying reasons, not because of intellectual laziness, but because those reasons often lie outside their realm of expertise.  Sadly enough, the most vocal opinions are usually shouted by those who understand the least.  And although the media often treats it otherwise, volume does not equal correctness, understanding, or controversy, much less consensus.  As elitist as it may sound, I believe that knowledge is not a democracy, and public policy (especially on complex scientific issues) should be debated and guided by relevant experts, not by popular vote.

Scientists are generally the least confrontational and least vocal group, and politically have the least influence.  And let’s face it, the jargon-laden, carefully crafted, highly qualified statements that are spewed from their facial orifices don’t exactly appeal to voters.  So politically, are we doomed, in a Darwinian sense?  I’ll go out on a limb and say no, because although suboptimal, thankfully and ironically, ignorance is global.  Politicians everywhere are elected by popularity and not intelligence or expertise, and dictators do not rule because of oversized brains.  We are no worse off if everyone else is equally as bad.  At least that is my intuition.

* Afterword: Putting the issue of limited resources and fairness aside, I am not opposed to extra testing, provided that the person fully understands the implications, the risks, and what the test results actually mean, if anything.  I do oppose unnecessary testing, which I define as any test that will not change the course of action.  It makes no more sense than rearranging the deck chairs on the Titanic, disinfecting a death row inmate’s arm before a lethal injection, or ordering a Pap smear for a 90-year-old.

Conscious Machine

If you are reading this, chances are that you are alive, have a brain, and are conscious.  There is also a chance this is being read by a machine, which could range from a simple web crawler/indexer to a more sophisticated content/context analyzer.  In the machine case, in some way it can be considered to be “alive” (powered by electricity), but conscious?  Most would disagree.

Being alive is not easy to define but can be characterized. Consciousness, on the other hand, defies a precise definition, yet is intuitively understood by seemingly everyone.

But is consciousness what separates us from machines?  Since consciousness cannot be precisely defined, no specific test can be designed to answer that.  There are working alternatives such as the Turing test, which purportedly tests for intelligence (and its absence) but really measures how well a machine simulates human interaction, and the mirror test, which tests for self-awareness.  Neither test is satisfactory.

Can a sufficiently advanced machine be considered alive and conscious?  That is an interesting question, but I consider it irrelevant for reasons I will expand on later.

Here is a thought experiment.  Some machines already pass the Turing Test relatively well, and I can conceive of a day when they can simulate intelligence very well.  Imagine one day in the not so distant future, when someone creates an advanced computer (or a robot similar to Isaac Asimov’s).  This computer could perform a self-check on the health of its components (a feature already in most operating systems).  It could check the internet for new components and upgrades, both software and hardware.  Say it has access to electronic funds and can order components online and have someone install them, then verify that they work properly.  These could be hardware components like a redundant power system, extra batteries, or robotic arms, or software upgrades, or even cloud applications.  It could respond to external stimuli.  Spontaneity could be programmed in as random actions, perhaps taken according to a cost function (redistribute resources, upgrade hardware, etc.).  It could be programmed to reproduce itself by examining its own components, purchasing everything online, and hiring someone for assembly.  Upon completion, it could verify that all systems work after assembly, upload its own software over the network, and authorize the final payment.  It could even seek and accept computational jobs for money online, or invest in a portfolio, to replenish the resource pool and become self-sustaining.  It could defend itself against online attacks and prepare for certain contingencies (redundant power supplies and critical components).  It could even order a protective casing, or even bodyguards, I mean machineguards, to protect against physical attacks, if sufficient funds are available.

This hypothetical machine could reproduce itself, maybe not organically, which I argue is irrelevant anyway.  The ultimate goal of reproduction is reached, and there are plenty of examples of effective reproduction requiring outside agents (e.g., bees and pollen).

With GPS, optical hardware, mechanical components, and recognition software, it could realize where it is in space and recognize itself in a mirror based on certain tests and feedback.

Back to the question: “can a sufficiently advanced machine be considered alive and conscious?”  I contend that this is not the right question, since there is no simple definition of either.  A better question, in the spirit of the Turing Test, would be: “could a sufficiently advanced machine, from external observation or interaction alone, be distinguished from a living, conscious being?”

To answer this question, I think the best way is to step back and stop thinking like a human for a bit.

Stealing from Scott Adams’ example in God’s Debris (the “Evolution” chapter), imagine highly intelligent extraterrestrial beings visiting Earth after an extinction event has wiped out all organic life.  They find fossils and books and all the documentation of what used to be life on Earth.  They also find extensive videos, logs, and archives of these amazing machines in action, but the underlying code is gone forever.  Unburdened by the arbitrary Earth-centric biological classification of Life – Domain – Kingdom – (blah blah) – Species, and judging from the evidence at hand alone, I contend that these aliens would consider the machines alive, and probably classify them under “inorganic life”.

This hypothetical machine meets most, if not all, of the characteristic descriptions of life (since there is no easy definition of “life”).  I argue that without access to its underlying code, from any observational, behavioral, or external perspective, the machine is alive.  Consciousness would be an abstract concept that an alien may or may not have, but there is no reason to think that, from an external viewpoint, the machines would not have it.

So what, then, separates humans from a sufficiently advanced machine?  Cognition?  Sentience?  I have a simpler answer.

Three pounds of thinking meat.


Update: After going down the rabbit hole of hypothetical advanced Artificial Intelligence (Singularity, FAI/UFAI, Roko’s Basilisk) and its implications, I concede that the robot in my thought experiment is crude and probably not thought out in sufficient detail.  However, for the purpose of the thought experiment, the point it makes still stands.  I later discovered that it is very similar to the Giant Robot thought experiment described by Dennett (Intuition Pumps and Other Tools for Thinking, 2013).