# Successive over-relaxation

In numerical linear algebra, the method of successive over-relaxation (SOR) is a variant of the Gauss–Seidel method for solving a linear system of equations, resulting in faster convergence. A similar method can be used for any slowly converging iterative process. It was devised simultaneously by David M. Young and by Stanley P. Frankel in 1950 for the purpose of automatically solving linear systems on digital computers. Over-relaxation methods had been used before the work of Young and Frankel, for instance the method of Lewis Fry Richardson and the methods developed by R. V. Southwell. However, these methods were designed for computation by human calculators, and they required some expertise to ensure convergence to the solution, which made them inapplicable for programming on digital computers. These aspects are discussed in the thesis of David M. Young.[1]
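
The SOR iteration sweeps through the unknowns in place, blending each Gauss–Seidel update with the previous value via a relaxation factor ω. A minimal sketch in Python follows; the example matrix, right-hand side, and the choice ω = 1.1 are illustrative assumptions, not taken from the sources above.

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    """Solve A x = b by successive over-relaxation (illustrative sketch).

    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 over-relaxes,
    which can speed up convergence considerably.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            # Use already-updated entries x[:i] and old entries beyond i.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Illustrative system: small, symmetric, diagonally dominant.
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = sor(A, b, omega=1.1)
```

For symmetric positive-definite matrices the iteration converges for any 0 < ω < 2, and a well-chosen ω > 1 accelerates convergence relative to Gauss–Seidel.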

# The candle problem

The candle problem or candle task, also known as Duncker’s candle problem, is a cognitive performance test, measuring the influence of functional fixedness on a participant’s problem-solving capabilities. The test was created by Gestalt psychologist Karl Duncker[1] and published posthumously in 1945. Duncker originally presented this test in his thesis on problem solving tasks at Clark University.

The test presents the participant with the following task: how to fix a lit candle on a wall (a cork board) so that the candle wax won’t drip onto the table below.[3] To do so, one may use only the following along with the candle:

• a book of matches
• a box of thumbtacks

The solution is to empty the box of thumbtacks, put the candle into the box, use the thumbtacks to nail the box (with the candle in it) to the wall, and light the candle with the match.[3] The concept of functional fixedness predicts that the participant will only see the box as a device to hold the thumbtacks and not immediately perceive it as a separate and functional component available to be used in solving the task.

## Response

Many of the people who attempted the test explored other creative, but less efficient, methods to achieve the goal. For example, some tried to tack the candle to the wall without using the thumbtack box,[4] and others attempted to melt some of the candle’s wax and use it as an adhesive to stick the candle to the wall.[1] Neither method works.[1] However, when the task was presented with the tacks piled next to the box (rather than inside it), virtually all of the participants achieved the optimal solution.[4]

The test has been given to numerous people, including M.B.A. students at the Kellogg School of Management in a study investigating whether living abroad and creativity are linked.[5]

## Glucksberg

Glucksberg (1962)[6] used a 2 × 2 design manipulating whether the tacks and matches were inside or outside of their boxes and whether subjects were offered cash prizes for completing the task quickly. Subjects who were offered no prize, termed low-drive, were told “We are doing pilot work on various problems in order to decide which will be the best ones to use in an experiment we plan to do later. We would like to obtain norms on the time needed to solve.” The remaining subjects, termed high-drive, were told “Depending on how quickly you solve the problem you can win \$5.00 or \$20.00. The top 25% of the Ss [subjects] in your group will win \$5.00 each; the best will receive \$20.00. Time to solve will be the criterion used.” (Adjusted for inflation since 1962, the year the study was published, the amounts would be approximately \$39 and \$154 in today’s dollars, respectively.[7]) The empty-boxes condition was found to be easier than the filled-boxes condition: more subjects solved the problem, and those who did solve the problem solved it faster. Within the filled-boxes condition, high-drive subjects performed worse than low-drive subjects. Glucksberg interpreted this result in terms of “neobehavioristic drive theory”: “high drive prolongs extinction of the dominant habit and thus retards the correct habit from gaining ascendancy”. An explanation in terms of the overjustification effect is made difficult by the lack of a main effect for drive and by a nonsignificant trend in the opposite direction within the empty-boxes condition.

Another way to explain the higher failure rate in the high-drive condition is that turning the task into a competition for limited resources can create mild stress in the subject, which can trigger the sympathetic nervous system’s fight-or-flight response. This stress response effectively suppresses the creative thinking and problem-solving areas of the brain in the prefrontal cortex.

## Linguistic implications

E. Tory Higgins and W. M. Chaires found that having subjects repeat the names of common pairs of objects in this test, but in a different and unaccustomed linguistic structure, such as “box and tacks” instead of “box of tacks”, facilitated performance on the candle problem.[3] This phrasing helps one perceive the two entities as distinct and therefore more accessible.[3]

In a written version of the task given to people at Stanford University, Michael C. Frank and language acquisition researcher Michael Ramscar reported that simply underlining certain relevant materials (“on the table there is a candle, a box of tacks, and a book of matches…”) increases the number of candle-problem solvers from 25% to 50%.[4]

## References

1. “Dan Pink on the surprising science of motivation”. Retrieved 16 January 2010.
2. Daniel Biella and Wolfram Luther. “A Synthesis Model for the Replication of Historical Experiments in Virtual Environments”. 5th European Conference on e-Learning. Academic Conferences Limited. p. 23. ISBN 978-1-905305-30-8.
3. Richard E. Snow and Marshall J. Farr, ed. (1987). “Positive Affect and Organization”. Aptitude, Learning, and Instruction Volume 3: Conative and Affective Process Analysis. Routledge. ISBN 978-0-89859-721-9.
4. Frank, Michael. “Against Informational Atomism”. Retrieved 15 January 2010.
5. “Living Outside the Box: Living abroad boosts creativity”. April 2009. Retrieved 16 January 2010.
6. Glucksberg, S. (1962). “The influence of strength of drive on functional fixedness and perceptual recognition”. Journal of Experimental Psychology 63: 36–41. doi:10.1037/h0044683. PMID 13899303.
7. Inflated values automatically calculated.

# Cognitive bias

The notion of cognitive biases was introduced by Amos Tversky and Daniel Kahneman in 1972 and grew out of their experience of people’s innumeracy, or inability to reason intuitively with the greater orders of magnitude. They and their colleagues demonstrated several replicable ways in which human judgments and decisions differ from rational choice theory. They explained these differences in terms of heuristics: rules which are simple for the brain to compute but introduce systematic errors. For instance, the availability heuristic occurs when the ease with which something comes to mind is used to indicate how often (or how recently) it has been encountered.

Cognitive bias is any of a wide range of observer effects identified in cognitive science, including very basic statistical and memory errors that are common to all human beings (many of which have been discussed by Amos Tversky and Daniel Kahneman) and drastically skew the reliability of anecdotal and legal evidence. They also significantly affect the scientific method, which is deliberately designed to minimize such bias from any one observer.

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, which may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality.[1][2][3] Implicit in the concept of a “pattern of deviation” is a standard of comparison with what is normatively expected; this may be the judgment of people outside those particular situations, or may be a set of independently verifiable facts. A continually evolving list of cognitive biases has been identified over the last six decades of research on human judgment and decision-making in cognitive science, social psychology, and behavioral economics.

Some cognitive biases are presumably adaptive, for example, because they lead to more effective actions in a given context or enable faster decisions when timeliness is more valuable than accuracy (heuristics). Others presumably result from a lack of appropriate mental mechanisms (bounded rationality), or simply from a limited capacity for information processing.

Biases can be distinguished on a number of dimensions. For example, there are biases specific to groups (such as the risky shift) as well as biases at the individual level.

Some biases affect decision-making, where the desirability of options has to be considered (e.g., the sunk cost fallacy). Others, such as illusory correlation, affect judgment of how likely something is, or of whether one thing is the cause of another. A distinctive class of biases affect memory,[12] such as consistency bias (remembering one’s past attitudes and behavior as more similar to one’s present attitudes).

Some biases reflect a subject’s motivation,[13] for example, the desire for a positive self-image leading to egocentric bias[14] and the avoidance of unpleasant cognitive dissonance. Other biases are due to the particular way the brain perceives, forms memories and makes judgments. This distinction is sometimes described as “hot cognition” versus “cold cognition”, as motivated reasoning can involve a state of arousal.

Among the “cold” biases, some are due to ignoring relevant information (e.g., neglect of probability), whereas some involve a decision or judgment being affected by irrelevant information (for example, the framing effect, where the same problem receives different responses depending on how it is described) or giving excessive weight to an unimportant but salient feature of the problem (e.g., anchoring).

The fact that some biases reflect motivation, and in particular the motivation to have positive attitudes to oneself,[14] accounts for the fact that many biases are self-serving or self-directed (e.g., the illusion of asymmetric insight, self-serving bias, and projection bias). There are also biases in how subjects evaluate in-groups or out-groups; they evaluate in-groups as more diverse and “better” in many respects, even when those groups are arbitrarily defined (ingroup bias, outgroup homogeneity bias).

Some cognitive biases belong to the subgroup of attentional biases which refer to the paying of increased attention to certain stimuli. It has been shown, for example, that people addicted to alcohol and other drugs pay more attention to drug-related stimuli. Common psychological tests to measure those biases are the Stroop Task[15][16] and the Dot Probe Task.

The following is a list of the more commonly studied cognitive biases:

For other noted biases, see List of cognitive biases.
• Framing by using a too-narrow approach and description of the situation or issue.
• Hindsight bias, sometimes called the “I-knew-it-all-along” effect, is the inclination to see past events as being predictable.
• Fundamental attribution error is the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior.
• Confirmation bias is the tendency to search for or interpret information in a way that confirms one’s preconceptions; this is related to the concept of cognitive dissonance.
• Self-serving bias is the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.
• Belief bias is when one’s evaluation of the logical strength of an argument is biased by their belief in the truth or falsity of the conclusion.

A 2012 Psychological Bulletin article suggests that at least eight seemingly unrelated biases can be produced by the same information-theoretic generative mechanism.[17] It is shown that noisy deviations in the memory-based information processes that convert objective evidence (observations) into subjective estimates (decisions) can produce regressive conservatism, Bayesian conservatism, illusory correlations, the better-than-average and worse-than-average effects, the subadditivity effect, exaggerated expectation, overconfidence, and the hard–easy effect.
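
The flavor of such a noisy-channel mechanism can be illustrated with a toy model; this is my own illustrative sketch, not the article’s actual model. Suppose an objective value on a bounded 0–100 scale is stored in memory with additive noise. Clipping at the scale’s endpoints alone is enough to make extreme values regress toward the middle, the signature of regressive conservatism.

```python
import numpy as np

rng = np.random.default_rng(0)

def subjective_estimate(true_value, noise_sd=10.0, n=100_000):
    """Mean recalled value after passing through a noisy, bounded channel."""
    noisy = true_value + rng.normal(0.0, noise_sd, size=n)
    return np.clip(noisy, 0.0, 100.0).mean()

high = subjective_estimate(95.0)  # extreme-high value is underestimated
low = subjective_estimate(5.0)    # extreme-low value is overestimated
mid = subjective_estimate(50.0)   # mid-scale value is roughly unbiased
```

Here `high` comes out below 95 and `low` above 5, while `mid` stays near 50: the noise alone produces the regressive pattern, with no motivational assumptions.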

However, as recent research has demonstrated, even scientists who adhere to the scientific method can’t guarantee they will draw the best possible conclusions. “How could such highly educated and precisely trained professionals veer off the path of objectivity?” The answer is simple: being human.

As the fields of psychology and behavioral economics have demonstrated, homo sapiens is a seemingly irrational species that appears to, more often than not, think and behave in nonsensical rather than commonsensical ways. The reason is that we fall victim to a veritable laundry list of cognitive biases that cause us to engage in distorted, imprecise and incomplete thinking which, not surprisingly, results in “perceptual distortion, inaccurate judgment or illogical interpretation” (thanks Wikipedia), and, by extension, poor and sometimes catastrophic decisions.

Well-known examples of the results of cognitive biases include the Internet, the housing and financial crises of the past decade, truly stupid use of social media by politicians, celebrities and professional athletes, the existence of the \$2.5 billion self-help industry, and, well, believing that a change in the controlling party in Washington will somehow change its toxic political culture.

What is interesting is that many of these cognitive biases must have had, at some point in our evolution, adaptive value. These distortions helped us to process information more quickly (e.g., stalking prey in the jungle), meet our most basic needs (e.g., help us find mates) and connect with others (e.g., be a part of a “tribe”).

The biases that helped us survive in primitive times, when life was much simpler (e.g., life goal: live through the day) and the speed of a decision rightfully trumped its absolute accuracy, don’t appear to be quite as adaptive in today’s much more complex world. Given the complicated nature of life these days, correctness of information, thoroughness of processing, precision of interpretation and soundness of judgment are, in most situations, far more important than the simplest and fastest route to a judgment.

Unfortunately, there is no magic pill that will inoculate us from these cognitive biases. But we can reduce their power over us by understanding these distortions, looking for them in our own thinking and making an effort to counter their influence over us as we draw conclusions, make choices and come to decisions. In other words, just knowing and considering these universal biases (in truth, what most people call common sense is actually common bias) will make us less likely to fall victim to them.

Here are some of the most widespread cognitive biases that contaminate our ability to use common sense:

• The bandwagon effect (aka herd mentality) describes the tendency to think or act in certain ways because other people do. Examples include the popularity of Apple products, use of “in-group” slang and clothing styles, and watching the “Real Housewives of … ” reality-TV franchise.
• The confirmation bias involves the inclination to seek out information that supports our own preconceived notions. The reality is that most people don’t like to be wrong, so they surround themselves with people and information that confirm their beliefs. The most obvious example these days is the tendency to follow news outlets that reinforce our political beliefs.
• Illusion of control is the propensity to believe that we have more control over a situation than we actually do. If we don’t actually have control, we fool ourselves into thinking we do. Examples include rally caps in sports and “lucky” items.
• The Semmelweis reflex (just had to include this one because of its name) is the predisposition to deny new information that challenges our established views. Sort of the yang to the yin of the confirmation bias, it exemplifies the adage “if the facts don’t fit the theory, throw out the facts.” An example is the “Seinfeld” episode in which George Costanza’s girlfriend simply refuses to allow him to break up with her.
• The causation bias describes the tendency to assume a cause-and-effect relationship in situations where none exists (or where there is merely a correlation or association). An example is believing someone is angry with you because they haven’t responded to your email when, more likely, they are busy and just haven’t gotten to it yet.
• The overconfidence effect involves unwarranted confidence in one’s own knowledge. Examples include political and sports prognosticators.
• The false consensus effect is the penchant to believe that others agree with you more than they actually do. Examples include guys who assume that all guys like sexist humor.
• Finally, the granddaddy of all cognitive biases, the fundamental attribution error, which involves the tendency to attribute other people’s behavior to their personalities and to attribute our own behavior to the situation. An example is when someone treats you poorly, you probably assume they are a jerk, but when you’re not nice to someone, it’s because you are having a bad day.

Many of these biases are studied for how they affect belief formation and business decisions and scientific research.

• Bandwagon effect — the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink, crowd psychology, herd behaviour, and manias.
• Bias blind spot — the tendency not to compensate for one’s own cognitive biases.
• Choice-supportive bias — the tendency to remember one’s choices as better than they actually were.
• Confirmation bias — the tendency to search for or interpret information in a way that confirms one’s preconceptions.
• Congruence bias — the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses.
• Contrast effect — the enhancement or diminishment of a weight or other measurement when compared with recently observed contrasting object.
• Déformation professionnelle — the tendency to look at things according to the conventions of one’s own profession, forgetting any broader point of view.
• Endowment effect — “the fact that people often demand much more to give up an object than they would be willing to pay to acquire it”.[2]
• Exposure-suspicion bias – a knowledge of a subject’s disease in a medical study may influence the search for causes.
• Extreme aversion — most people will go to great lengths to avoid extremes. People are more likely to choose an option if it is the intermediate choice.
• Focusing effect — prediction bias occurring when people place too much importance on one aspect of an event; causes error in accurately predicting the utility of a future outcome.
• Framing – drawing different conclusions from the same information, depending on how that information is presented.
• Hyperbolic discounting — the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, the closer to the present both payoffs are.
• Illusion of control — the tendency for human beings to believe they can control or at least influence outcomes that they clearly cannot.
• Impact bias — the tendency for people to overestimate the length or the intensity of the impact of future feeling states.
• Information bias — the tendency to seek information even when it cannot affect action.
• Irrational escalation — the tendency to make irrational decisions based upon rational decisions in the past or to justify actions already taken.
• Loss aversion — “the disutility of giving up an object is greater than the utility associated with acquiring it”.[3] (see also sunk cost effects and Endowment effect).
• Neglect of probability — the tendency to completely disregard probability when making a decision under uncertainty.
• Mere exposure effect — the tendency for people to express undue liking for things merely because they are familiar with them.
• Obsequiousness bias – the tendency to systematically alter responses in the direction they perceive desired by the investigator.
• Omission bias — the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).
• Outcome bias — the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.
• Planning fallacy — the tendency to underestimate task-completion times. Also formulated as Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”
• Post-purchase rationalization — the tendency to persuade oneself through rational argument that a purchase was a good value.
• Pseudocertainty effect — the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes.
• Reactance – the urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice.
• Selective perception — the tendency for expectations to affect perception.
• Status quo bias — the tendency for people to like things to stay relatively the same (see also Loss aversion and Endowment effect).[4]
• Unacceptability bias – questions that may embarrass or invade privacy are refused or evaded.
• Unit bias — the tendency to want to finish a given unit of a task or an item; it has strong effects on the consumption of food in particular.
• Von Restorff effect — the tendency for an item that “stands out like a sore thumb” to be more likely to be remembered than other items.
• Zero-risk bias — the preference for reducing a small risk to zero over a greater reduction in a larger risk. It is relevant e.g. to the allocation of public health resources and the debate about nuclear power.

Many of these biases are often studied for how they affect business and economic decisions and how they affect experimental research.

• Ambiguity effect — the avoidance of options for which missing information makes the probability seem “unknown”.
• Anchoring — the tendency to rely too heavily, or “anchor,” on a past reference or on one trait or piece of information when making decisions.
• Anthropic bias — the tendency for one’s evidence to be biased by observation selection effects.
• Attentional bias — neglect of relevant data when making judgments of a correlation or association.
• Availability heuristic — a biased prediction, due to the tendency to focus on the most salient and emotionally-charged outcome.
• Clustering illusion — the tendency to see patterns where actually none exist.
• Conjunction fallacy — the tendency to assume that specific conditions are more probable than general ones.
• Gambler’s fallacy — the tendency to assume that individual random events are influenced by previous random events. For example, “I’ve flipped heads with this coin five times consecutively, so the chance of tails coming out on the sixth flip is much greater than heads.”
• Hindsight bias — sometimes called the “I-knew-it-all-along” effect: the inclination to see past events as being predictable, based on knowledge of later events.
• Hostile media effect — the tendency to perceive news coverage as biased against your position on an issue.
• Illusory correlation — beliefs that inaccurately suppose a relationship between a certain type of action and an effect.
• Ludic fallacy — the analysis of chance-related problems within the narrow frame of games, ignoring the complexity of reality and the non-Gaussian distribution of many things.
• Neglect of prior base rates effect — the tendency to fail to incorporate prior known probabilities which are pertinent to the decision at hand.
• Observer-expectancy effect — when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
• Optimism bias — the systematic tendency to be over-optimistic about the outcome of planned actions. Found to be linked to the left inferior frontal gyrus; disrupting this section of the brain removes the bias.
• Overconfidence effect — the tendency to overestimate one’s own abilities.
• Positive outcome bias — the tendency in prediction to overestimate the probability of good things happening to oneself (see also wishful thinking, optimism bias and valence effect).
• Primacy effect — the tendency to weigh initial events more than subsequent events.
• Recency effect — the tendency to weigh recent events more than earlier events (see also ‘peak-end rule’).
• Reminiscence bump — the effect that people tend to recall more personal events from adolescence and early adulthood than from other lifetime periods.
• Rosy retrospection — the tendency to rate past events more positively than one actually rated them when the events occurred.
• Subadditivity effect — the tendency to judge probability of the whole to be less than the probabilities of the parts.
• Telescoping effect — the effect that recent events appear to have occurred more remotely and remote events appear to have occurred more recently.
• Texas sharpshooter fallacy — the fallacy of selecting or adjusting a hypothesis after the data are collected, making it impossible to test the hypothesis fairly.
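
The gambler’s fallacy above is easy to check numerically. A minimal simulation in Python (the sample size and seed are arbitrary choices of mine) draws many six-flip sequences of a fair coin and conditions on the first five flips all coming up heads:

```python
import numpy as np

rng = np.random.default_rng(42)
flips = rng.integers(0, 2, size=(200_000, 6))    # 1 = heads, fair coin
streaks = flips[:, :5].sum(axis=1) == 5          # first five flips all heads
p_heads_after_streak = flips[streaks, 5].mean()  # sixth flip, given the streak
```

The conditional frequency `p_heads_after_streak` comes out near 0.5: a streak of heads carries no information about the next independent flip.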

Most of these biases are labeled as attributional biases.

• Actor-observer bias — the tendency for explanations of other individuals’ behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation. This is coupled with the opposite tendency for the self: one’s explanations for one’s own behaviors overemphasize the situation and underemphasize the influence of personality (see also fundamental attribution error).
• Dunning-Kruger effect — “…when people are incompetent in the strategies they adopt to achieve success and satisfaction, they suffer a dual burden: Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it. Instead, …they are left with the mistaken impression that they are doing just fine.”[5] (See also the Lake Wobegon effect, and overconfidence effect).
• Egocentric bias — occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would.
• Forer effect (aka Barnum effect) — the tendency to give high accuracy ratings to descriptions of one’s personality that supposedly are tailored specifically for oneself but are in fact vague and general enough to apply to a wide range of people. For example, horoscopes.
• False consensus effect — the tendency for people to overestimate the degree to which others agree with them.
• Fundamental attribution error — the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor-observer bias, group attribution error, positivity effect, and negativity effect).
• Halo effect — the tendency for a person’s positive or negative traits to “spill over” from one area of their personality to another in others’ perceptions of them (see also physical attractiveness stereotype).
• Herd instinct – a common tendency to adopt the opinions and follow the behaviors of the majority to feel safer and to avoid conflict.
• Illusion of asymmetric insight — people perceive their knowledge of their peers to surpass their peers’ knowledge of them.
• Illusion of transparency — people overestimate others’ ability to know them, and they also overestimate their ability to know others.
• Ingroup bias — the tendency for people to give preferential treatment to others they perceive to be members of their own groups.
• Just-world phenomenon — the tendency for people to believe that the world is “just” and therefore people “get what they deserve.”
• Lake Wobegon effect — the human tendency to report flattering beliefs about oneself and believe that one is above average (see also worse-than-average effect, and overconfidence effect).
• Notational bias — a form of cultural bias in which a notation induces the appearance of a nonexistent natural law.
• Outgroup homogeneity bias — individuals see members of their own group as being relatively more varied than members of other groups.
• Projection bias — the tendency to unconsciously assume that others share the same or similar thoughts, beliefs, values, or positions.
• Self-serving bias — the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias).
• Self-fulfilling prophecy — the tendency to engage in behaviors that elicit results which will (consciously or subconsciously) confirm our beliefs.
• System justification — the tendency to defend and bolster the status quo, i.e. existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged sometimes even at the expense of individual and collective self-interest.
• Trait ascription bias — the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable.
• Beneffectance – perceiving oneself as responsible for desirable outcomes but not responsible for undesirable ones. (Term coined by Greenwald (1980))
• Consistency bias – incorrectly remembering one’s past attitudes and behaviour as resembling present attitudes and behaviour.
• Cryptomnesia – a form of misattribution where a memory is mistaken for imagination.
• Egocentric bias – recalling the past in a self-serving manner, e.g. remembering one’s exam grades as being better than they were, or remembering a caught fish as being bigger than it was.
• Confabulation or false memory – remembering something that never actually happened.
• Hindsight bias – filtering memory of past events through present knowledge, so that those events look more predictable than they actually were; also known as the ‘I-knew-it-all-along effect’.
• Selective memory and selective reporting.
• Suggestibility – a form of misattribution where ideas suggested by a questioner are mistaken for memory. Often a key aspect of hypnotherapy.

# Baloney Detection Kit

#### THE TEN QUESTIONS

1. How reliable is the source of the claim?
2. Does the source make similar claims?
3. Have the claims been verified by somebody else?
4. Does this fit with the way the world works?
5. Has anyone tried to disprove the claim?
6. Where does the preponderance of evidence point?
7. Is the claimant playing by the rules of science?
8. Is the claimant providing positive evidence?
9. Does the new theory account for as many phenomena as the old theory?
10. Are personal beliefs driving the claim?

#### CREDITS

This is the first video by RDFTV.
Presented by The Richard Dawkins Foundation for Reason and Science
Directed by Josh Timonen
Produced by Maureen Norton
Animation by Pew 36 Animation Studios
Music by Neal Acree
Post Production Sound by Sound Satisfaction
Supervising Sound Editor/Re-Recording Mixer: Gary J. Coppola, C.A.S.
Sound Editor: Ben Rauscher
Production Assistant: Graham Immel

# Baloney Detection

## How to draw boundaries between science and pseudoscience

### By Michael Shermer

When lecturing on science and pseudoscience at colleges and universities, I am inevitably asked, after challenging common beliefs held by many students, “Why should we believe you?” My answer: “You shouldn’t.”

I then explain that we need to check things out for ourselves and, short of that, at least to ask basic questions that get to the heart of the validity of any claim. This is what I call baloney detection, in deference to Carl Sagan, who coined the phrase “Baloney Detection Kit.” To detect baloney–that is, to help discriminate between science and
pseudoscience–I suggest 10 questions to ask when encountering any claim.

#### 1. How reliable is the source of the claim?

Pseudoscientists often appear quite reliable, but when examined closely, the facts and figures they cite are distorted, taken out of context or occasionally even fabricated. Of course, everyone makes some mistakes. And as historian of science Daniel Kevles showed so effectively in his book The Baltimore Case, it can be hard to detect a fraudulent signal within the background noise of sloppiness that is a normal part of the scientific process. The question is, Do the data and interpretations show signs of intentional distortion? When an independent committee established to investigate potential fraud scrutinized a set of research notes in Nobel laureate David Baltimore’s laboratory, it revealed a surprising number of mistakes. Baltimore was exonerated because his lab’s mistakes were random and nondirectional.

#### 2. Does this source often make similar claims?

Pseudoscientists have a habit of going well beyond the facts. Flood geologists (creationists who believe that Noah’s flood can account for many of the earth’s geologic formations) consistently make outrageous claims that bear no relation to geological science. Of course, some great thinkers do frequently go beyond the data in their creative speculations.

Thomas Gold of Cornell University is notorious for his radical ideas, but he has been right often enough that other scientists listen to what he has to say. Gold proposes, for example, that oil is not a fossil fuel at all but the by-product of a deep, hot biosphere (microorganisms living at unexpected depths within the crust). Hardly any earth scientists with whom I have spoken think Gold is right, yet they do not consider him a crank. Watch out for a pattern of fringe thinking that consistently ignores or distorts data.

#### 3. Have the claims been verified by another source?

Typically pseudoscientists make statements that are unverified or verified only by a source within their own belief circle. We must ask, Who is checking the claims, and even who is checking the checkers? The biggest problem with the cold fusion debacle, for instance, was not that Stanley Pons and Martin Fleischman were wrong. It was that they announced their spectacular discovery at a press conference before other laboratories verified it. Worse, when cold fusion was not replicated, they continued to cling to their claim. Outside verification is crucial to good science.

#### 4. How does the claim fit with what we know about how the world works?

An extraordinary claim must be placed into a larger context to see how it fits. When people claim that the Egyptian pyramids and the Sphinx were built more than 10,000 years ago by an unknown, advanced race, they are not presenting any context for that earlier civilization. Where are the rest of the artifacts of those people? Where are their works of art, their weapons, their clothing, their tools, their trash? Archaeology simply does not operate this way.

#### 5. Has anyone gone out of the way to disprove the claim, or has only supportive evidence been sought?

This is the confirmation bias, or the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive and almost impossible for any of us to avoid. It is why the methods of science that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are so critical.

# Power iteration

In mathematics, the power iteration is an eigenvalue algorithm: given a matrix A, the algorithm will produce a number λ (the eigenvalue) and a nonzero vector v (the eigenvector), such that Av = λv.
The power iteration is a very simple algorithm. It does not compute a matrix decomposition, and hence it can be used when A is a very large sparse matrix. However, it will find only one eigenvalue (the one with the greatest absolute value) and it may converge only slowly.
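The iteration itself is just "multiply by A and renormalize." A minimal sketch in Python with NumPy (the function name and tolerance are illustrative, not part of any standard API); the eigenvalue estimate here is the Rayleigh quotient of the current iterate:

```python
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10):
    """Estimate the dominant eigenpair of A by repeated matrix-vector products."""
    n = A.shape[0]
    v = np.random.default_rng(0).standard_normal(n)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v                      # the only operation involving A
        v_new = w / np.linalg.norm(w)  # renormalize so the iterate never over/underflows
        lam_new = v_new @ A @ v_new    # Rayleigh quotient: current eigenvalue estimate
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        lam, v = lam_new, v_new
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
# lam approximates the dominant eigenvalue (5 + sqrt(5)) / 2 of this matrix
```

Because the loop touches A only through the product `A @ v`, the same code works when A is a sparse matrix (or any object implementing a matrix-vector product), which is exactly why the method suits very large sparse problems.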

# Arnoldi iteration

In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of iterative methods. Arnoldi finds the eigenvalues of general (possibly non-Hermitian) matrices; an analogous method for Hermitian matrices is the Lanczos iteration. The Arnoldi iteration was invented by W. E. Arnoldi in 1951.
The term iterative method, used to describe Arnoldi, can perhaps be somewhat confusing. Note that all general eigenvalue algorithms must be iterative. This is not what is referred to when we say Arnoldi is an iterative method. Rather, Arnoldi belongs to a class of linear algebra algorithms (based on the idea of Krylov subspaces) that give a partial result after a relatively small number of iterations. This is in contrast to so-called direct methods, which must complete to give any useful results.
Arnoldi iteration is a typical large sparse matrix algorithm: it does not access the elements of the matrix directly, but instead applies the matrix to vectors and draws its conclusions from the resulting images. This is the motivation for building the Krylov subspace.