Tag Archives: cognition

Flat Earther

Eratosthenes calculated the circumference of the Earth without leaving Egypt. He knew that at local noon on the summer solstice in Syene (modern Aswan, Egypt), the Sun was directly overhead: the shadow of someone looking down a deep well there at that moment blocked the reflection of the Sun on the water. On the same day he measured the Sun’s angle of elevation at noon in Alexandria, by making a scale drawing of the right triangle formed by a vertical rod and its shadow. The angle turned out to be about 7.2°, or 1/50th of the way around a circle. Taking the Earth as spherical, and knowing both the distance and direction of Syene, he concluded that the Earth’s circumference was fifty times that distance.

His knowledge of the size of Egypt was founded on the work of many generations of surveying trips. Pharaonic bookkeepers gave a distance between Syene and Alexandria of 5,000 stadia (a figure that was checked yearly). Some say that the distance was corroborated by inquiring about the time it took to travel from Syene to Alexandria by camel; Carl Sagan says that Eratosthenes paid a man to walk and measure the distance. Some claim Eratosthenes used the Olympic stade of 176.4 m, which would imply a circumference of 44,100 km, an error of 10%,[16] but the 184.8 m Italian stade became (300 years later) the most commonly accepted value for the length of the stade,[16] which implies a circumference of 46,100 km, an error of 15%.[16] It is unlikely, given his extremely primitive measuring tools, that Eratosthenes could have arrived at an accurate figure for the circumference of the Earth. He made three important assumptions (none of which is perfectly accurate):

  1. That the distance between Alexandria and Syene was 5,000 stadia;
  2. That the Earth is a perfect sphere;
  3. That light rays emanating from the Sun are parallel.

Eratosthenes later rounded the result to a final value of 700 stadia per degree, which implies a circumference of 252,000 stadia, likely for simplicity of calculation, as the larger number is evenly divisible by 60.[16] In 2012, Anthony Abreu Mora repeated Eratosthenes’ calculation with more accurate data; the result was 40,074 km, which differs by 66 km (0.16%) from the currently accepted polar circumference of the Earth.
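The arithmetic above is easy to check. A minimal sketch follows (stade lengths are the two candidate values quoted above; the modern polar circumference of roughly 40,008 km is the only figure not taken from the text):

```python
# Checking Eratosthenes' arithmetic with the figures quoted above.

angle_deg = 7.2                     # measured shadow angle, 1/50 of a full circle
distance_stadia = 5_000             # Syene to Alexandria
circumference_stadia = round(distance_stadia * 360 / angle_deg)   # 250,000 stadia

# His later rounding: 700 stadia per degree over 360 degrees.
rounded_stadia = 700 * 360          # 252,000 stadia, evenly divisible by 60

# Converting with the two candidate stade lengths (in metres):
km_olympic = circumference_stadia * 176.4 / 1000    # 44,100 km
km_italian = circumference_stadia * 184.8 / 1000    # ~46,200 km; the text's
                                                    # 46,100 km rounds slightly
                                                    # differently

# Error relative to the modern polar circumference:
modern_polar_km = 40_008
error_olympic = (km_olympic - modern_polar_km) / modern_polar_km  # ~10%
```
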

Seventeen hundred years after Eratosthenes’ death, while Christopher Columbus studied what Eratosthenes had written about the size of the Earth, he chose to believe, based on a map by Toscanelli, that the Earth’s circumference was one-third smaller. Had Columbus set sail knowing that Eratosthenes’ larger circumference value was more accurate, he would have known that the place that he made landfall was not Asia, but rather the New World.

cognitive achievement that comes at an emotional cost

November 11, 2010

By Steve Bradt, Harvard Staff Writer

People spend 46.9 percent of their waking hours thinking about something other than what they’re doing, and this mind-wandering typically makes them unhappy. So says a study that used an iPhone Web app to gather 250,000 data points on subjects’ thoughts, feelings, and actions as they went about their lives.

The research, by psychologists Matthew A. Killingsworth and Daniel T. Gilbert of Harvard University, is described this week in the journal Science.

“A human mind is a wandering mind, and a wandering mind is an unhappy mind,” Killingsworth and Gilbert write. “The ability to think about what is not happening is a cognitive achievement that comes at an emotional cost.”

The Act of Creation

Published on Apr 26, 2016
How do creative people come up with great ideas? Organizational psychologist Adam Grant studies “originals”: thinkers who dream up new ideas and take action to put them into the world. In this talk, learn three unexpected habits of originals — including embracing failure. “The greatest originals are the ones who fail the most, because they’re the ones who try the most,” Grant says. “You need a lot of bad ideas in order to get a few good ones.”

The Act of Creation is a 1964 book by Arthur Koestler. It is a study of the processes of discovery, invention, imagination and creativity in humour, science, and the arts. It lays out Koestler’s attempt to develop an elaborate general theory of human creativity.

From describing and comparing many different examples of invention and discovery, Koestler concludes that they all share a common pattern which he terms “bisociation” – a blending of elements drawn from two previously unrelated matrices of thought into a new matrix of meaning by way of a process involving comparison, abstraction and categorisation, analogies and metaphors. He regards many different mental phenomena based on comparison (such as analogies, metaphors, parables, allegories, jokes, identification, role-playing, acting, personification, anthropomorphism etc.), as special cases of “bisociation”.

The concept of bisociation has been adopted, generalised and formalised by cognitive linguists Gilles Fauconnier and Mark Turner, who developed it into conceptual blending theory.

Conceptual blending, also called conceptual integration or view application, is a theory of cognition developed by Gilles Fauconnier and Mark Turner. According to this theory, elements and vital relations from diverse scenarios are “blended” in a subconscious process, which is assumed to be ubiquitous to everyday thought and language.

The development of this theory began in 1993 and a representative early formulation is found in the online article Conceptual Integration and Formal Expression. Turner and Fauconnier cite Arthur Koestler’s 1964 book The Act of Creation as an early forerunner of conceptual blending: Koestler had identified a common pattern in creative achievements in the arts, sciences and humor that he had termed “bisociation of matrices.”[1] A newer version of blending theory, with somewhat different terminology, was presented in their book The Way We Think.

 

Cognitive reflection

The Cognitive Reflection Test is a short psychological task designed to measure a person’s tendency to override an initial “gut” response that is incorrect and to engage in further reflection to find a correct answer. More succinctly, it attempts to measure how reflective participants are about their own mental states. It has been found to correlate highly with measures of intelligence, such as IQ tests, and with various measures of mental heuristics. The Cognitive Reflection Test was first described in 2005 by psychologist Shane Frederick.[1][2]

According to Frederick, there are two general types of cognitive activity: the first is executed quickly and without reflection, while the second requires conscious thought and effort. These are labelled “system 1” and “system 2” respectively. The Cognitive Reflection Test consists of three questions, each of which has an obvious response that activates system 1 but is incorrect. The correct response requires the activation of system 2. However, in order for system 2 to be activated, a person must notice that their first answer is incorrect, which requires them to reflect upon their own cognition.[1]

The test has been found to correlate with many measures of economic thinking, such as temporal discounting, risk preference, and gambling preference.[1] It has also been found to correlate with measures of mental heuristics, such as the gambler’s fallacy, understanding of regression to the mean, the sunk cost fallacy, and others.[2]

The following questions are known as the Cognitive Reflection Test. They come from the paper Cognitive Reflection and Decision Making by Shane Frederick (2005).

Can you answer them correctly?

  1. A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents.
  2. If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes.
  3. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? _____ days

In a survey of 3,428 people, an astonishing 33 percent missed all three questions. Most people (83 percent) missed at least one of the questions.

Even very educated people made mistakes. Only 48 percent of MIT students sampled were able to answer all the questions correctly.
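The intuitive answers (10 cents, 100 minutes, 24 days) all fail a quick arithmetic check. A minimal sketch verifying the correct answers:

```python
# Arithmetic check of the three CRT answers. The intuitive responses
# (10 cents, 100 minutes, 24 days) fail each of these checks.

# 1. Bat and ball: ball + (ball + 100 cents) = 110 cents, so the ball is 5 cents.
ball_cents = 5
bat_cents = ball_cents + 100
assert ball_cents + bat_cents == 110          # $1.10 total

# 2. Widgets: each machine makes one widget per 5 minutes, so 100 machines
#    make 100 widgets in the same 5 minutes.
minutes = 5 * 100 // 100
assert minutes == 5

# 3. Lily pads: the patch doubles daily, so the lake was half covered
#    one day before it was fully covered: day 47.
full_day, half_day = 48, 48 - 1
assert 2 ** half_day * 2 == 2 ** full_day
```
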

Published on Oct 8, 2015

Affective neuroscience

People drew maps of body locations where they feel basic emotions (top row) and more complex ones (bottom row). Hot colors show regions that people say are stimulated during the emotion. Cool colors indicate deactivated areas. Image courtesy of Lauri Nummenmaa, Enrico Glerean, Riitta Hari, and Jari Hietanen.


Mapping Emotions On The Body: Love Makes Us Warm All Over

December 30, 2013, 4:04 PM ET
Michaeleen Doucleff

Affect is the experience of feeling or emotion.[1] Affect is a key part of the process of an organism’s interaction with stimuli. The word also refers sometimes to affect display, which is “a facial, vocal, or gestural behavior that serves as an indicator of affect” (APA 2006).


Linguistics as a Window to Understanding the Brain

The ability to communicate through spoken language may be the trait that best sets humans apart from other animals. Last year researchers identified the first gene implicated in the ability to speak. This week, a team shows that the human version of this gene appears to date back no more than 200,000 years, about the time that anatomically modern humans emerged. The authors argue that their findings are consistent with previous speculations that the worldwide expansion of modern humans was driven by the emergence of full-blown language abilities.

The researchers who identified the gene, called FOXP2, showed that FOXP2 mutations cause a wide range of speech and language disabilities (ScienceNOW, 3 October 2002). In collaboration with part of this team, geneticist Svante Pääbo’s group at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, set about tracing the gene’s evolutionary history.

As a uniquely human trait, language has long baffled evolutionary biologists. Not until FOXP2 was linked to a genetic disorder that caused problems in forming words could they even begin to study language’s roots in our genes. Soon after that discovery, a team at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, discovered that just two bases, the letters that make up DNA, distinguished the human and chimp versions of FOXP2. To try to determine how those changes influenced the gene’s function, that group put the human version of the gene in mice. In 2009, they observed that these “humanized” mice produced more frequent and complex alarm calls, suggesting the human mutations may have been involved in the evolution of more complex speech.

When humanized mice and wild mice were put in mazes that engaged both types of learning, the humanized mice mastered the route to the reward faster than their wild counterparts, report Schreiweis, Graybiel, and their colleagues.

The results suggest the human version of the FOXP2 gene may enable a quick switch to repetitive learning—an ability that could have helped infants 200,000 years ago better communicate with their parents. Better communication might have increased their odds of survival and enabled the new version of FOXP2 to spread throughout the entire human population, suggests Björn Brembs, a neurobiologist at the University of Regensburg in Germany, who was not involved with the work.

“The findings fit well with what we already knew about FOXP2 but, importantly, bridge the gap between behavioral, genetic, and evolutionary knowledge,” says Dianne Newbury, a geneticist at the Wellcome Trust Centre for Human Genetics in Oxford, U.K., who was not involved with the new research. “They help us to understand how the FOXP2 gene might have been important in the evolution of the human brain and direct us towards neural mechanisms that play a role in speech and language acquisition.”

Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in a more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick out regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.

 

Published on Oct 6, 2012

Steven Pinker – Psychologist, Cognitive Scientist, and Linguist at Harvard University

How did humans acquire language? In this lecture, best-selling author Steven Pinker introduces you to linguistics, the evolution of spoken language, and the debate over the existence of an innate universal grammar. He also explores why language is such a fundamental part of social relationships, human biology, and human evolution. Finally, Pinker touches on the wide variety of applications for linguistics, from improving how we teach reading and writing to how we interpret law, politics, and literature.

The Floating University

 

 

Dopamine

Dopamine is an organic chemical of the catecholamine and phenethylamine families that plays several important roles in the brain and body. Its name is derived from its chemical structure: it is an amine synthesized by removing a carboxyl group from a molecule of its precursor chemical L-DOPA, which is synthesized in the brain and kidneys. Dopamine is also synthesized in plants and most multicellular animals.

In the brain, dopamine functions as a neurotransmitter—a chemical released by neurons (nerve cells) to send signals to other nerve cells. The brain includes several distinct dopamine pathways, one of which plays a major role in reward-motivated behavior. Most types of reward increase the level of dopamine in the brain, and most addictive drugs increase dopamine neuronal activity. Other brain dopamine pathways are involved in motor control and in controlling the release of various hormones. These pathways and cell groups form a dopamine system which is neuromodulatory.

Outside the central nervous system, dopamine functions in several parts of the peripheral nervous system as a local chemical messenger. In blood vessels, it inhibits norepinephrine release and acts as a vasodilator (at normal concentrations); in the kidneys, it increases sodium excretion and urine output; in the pancreas, it reduces insulin production; in the digestive system, it reduces gastrointestinal motility and protects intestinal mucosa; and in the immune system, it reduces the activity of lymphocytes. With the exception of the blood vessels, dopamine in each of these peripheral systems is synthesized locally and exerts its effects near the cells that release it.

Several important diseases of the nervous system are associated with dysfunctions of the dopamine system, and some of the key medications used to treat them work by altering the effects of dopamine. Parkinson’s disease, a degenerative condition causing tremor and motor impairment, is caused by a loss of dopamine-secreting neurons in an area of the midbrain called the substantia nigra. Dopamine’s metabolic precursor L-DOPA can be manufactured; in its pure form, marketed as levodopa, it is the most widely used treatment for the condition. There is evidence that schizophrenia involves altered levels of dopamine activity, and most antipsychotic drugs used to treat this are dopamine antagonists which reduce dopamine activity.[2] Similar dopamine antagonist drugs are also some of the most effective anti-nausea agents. Restless legs syndrome and attention deficit hyperactivity disorder (ADHD) are associated with decreased dopamine activity.[3] Dopaminergic stimulants can be addictive in high doses, but some are used at lower doses to treat ADHD. Dopamine itself is available as a manufactured medication for intravenous injection: although it cannot reach the brain from the bloodstream, its peripheral effects make it useful in the treatment of heart failure or shock, especially in newborn babies.

Dopamine exerts its effects by binding to and activating cell surface receptors.[8] In mammals, five subtypes of dopamine receptors have been identified, labeled from D1 to D5.[8] All of them function as metabotropic, G protein-coupled receptors, meaning that they exert their effects via a complex second messenger system.[17] These receptors can be divided into two families, known as D1-like and D2-like.[8] For receptors located on neurons in the nervous system, the ultimate effect of D1-like activation (D1 and D5) can be excitation (via opening of sodium channels) or inhibition (via opening of potassium channels); the ultimate effect of D2-like activation (D2, D3, and D4) is usually inhibition of the target neuron.[17] Consequently, it is incorrect to describe dopamine itself as either excitatory or inhibitory: its effect on a target neuron depends on which types of receptors are present on the membrane of that neuron and on the internal responses of that neuron to the second messenger cAMP.[17] D1 receptors are the most numerous dopamine receptors in the human nervous system; D2 receptors are next; D3, D4, and D5 receptors are present at significantly lower levels.[17]

Inside the brain, dopamine functions as a neurotransmitter and neuromodulator, and is controlled by a set of mechanisms common to all monoamine neurotransmitters.[8] After synthesis, dopamine is transported from the cytosol into synaptic vesicles by a solute carrier—a vesicular monoamine transporter, VMAT2.[18] Dopamine is stored in these vesicles until it is ejected into the synaptic cleft through a process called exocytosis. In most cases exocytosis is caused by action potentials, but it can also be caused by the activity of an intracellular trace amine-associated receptor, TAAR1.[16] TAAR1 is a high-affinity receptor for dopamine, trace amines, and certain substituted amphetamines that is located along membranes in the intracellular milieu of the presynaptic cell;[16] activation of the receptor can regulate dopamine signaling by producing reuptake inhibition and neurotransmitter efflux and inhibiting neuronal firing through a diverse set of mechanisms.[16][19]

Once in the synapse, dopamine binds to and activates dopamine receptors. These can be the D2Lh type, located on the postsynaptic target cells, or the D2Sh autoreceptor type, located on the membrane of the presynaptic cell.[8] After an action potential, the dopamine molecules quickly become unbound from their receptors. They are then absorbed back into the presynaptic cell, via reuptake mediated either by the dopamine transporter or by the plasma membrane monoamine transporter.[20] Once back in the cytosol, dopamine can either be broken down by a monoamine oxidase or repackaged into vesicles by VMAT2, making it available for future release.[18]

In the brain the level of extracellular dopamine is modulated by two mechanisms: phasic and tonic transmission.[21] Phasic dopamine release, like most neurotransmitter release in the nervous system, is driven directly by action potentials in the dopamine-containing cells.[21] Tonic dopamine transmission occurs when small amounts of dopamine are released without being preceded by presynaptic action potentials.[21] Tonic transmission is regulated by a variety of factors, including the activity of other neurons and neurotransmitter reuptake.[21]

Inside the brain, dopamine plays important roles in executive functions, motor control, motivation, arousal, reinforcement, and reward, as well as lower-level functions including lactation, sexual gratification, and nausea. The dopaminergic cell groups and pathways make up the dopamine system which is neuromodulatory.

Dopaminergic neurons (dopamine-producing nerve cells) are comparatively few in number—a total of around 400,000 in the human brain[22]—and their cell bodies are confined in groups to a few relatively small brain areas.[23] However, their axons project to many other brain areas, and they exert powerful effects on their targets.[23] These dopaminergic cell groups were first mapped in 1964 by Annica Dahlström and Kjell Fuxe, who assigned them labels starting with the letter “A” (for “aminergic”).[24] In their scheme, areas A1 through A7 contain the neurotransmitter norepinephrine, whereas A8 through A14 contain dopamine. The dopaminergic areas they identified are the substantia nigra (groups 8 and 9); the ventral tegmental area (group 10); the posterior hypothalamus (group 11); the arcuate nucleus (group 12); the zona incerta (group 13); and the periventricular nucleus (group 14).[24]

The substantia nigra is a small midbrain area that forms a component of the basal ganglia. It has two parts—an input area called the pars compacta and an output area called the pars reticulata. The dopaminergic neurons are found mainly in the pars compacta (cell group A8) and nearby (group A9).[23] In humans, the projection of dopaminergic neurons from the substantia nigra pars compacta to the dorsal striatum, termed the nigrostriatal pathway, plays a significant role in the control of motor function and in learning new motor skills.[25] These neurons are especially vulnerable to damage, and when a large number of them die, the result is a parkinsonian syndrome.[26]

The ventral tegmental area (VTA) is another midbrain area. The most prominent group of VTA dopaminergic neurons projects to the prefrontal cortex via the mesocortical pathway, and another, smaller group projects to the nucleus accumbens via the mesolimbic pathway. These two pathways are collectively termed the mesocorticolimbic projection.[23][25] The VTA also sends dopaminergic projections to the amygdala, cingulate gyrus, hippocampus, and olfactory bulb.[23][25] Mesocorticolimbic neurons play a central role in reward and other aspects of motivation.[25]

The posterior hypothalamus has dopamine neurons that project to the spinal cord, but their function is not well established.[27] There is some evidence that pathology in this area plays a role in restless legs syndrome, a condition in which people have difficulty sleeping due to an overwhelming compulsion to constantly move parts of the body, especially the legs.[27]

The arcuate nucleus and the periventricular nucleus of the hypothalamus have dopamine neurons that form an important projection—the tuberoinfundibular pathway, which goes to the pituitary gland, where it influences the secretion of the hormone prolactin.[28] Dopamine is the primary neuroendocrine inhibitor of the secretion of prolactin from the anterior pituitary gland.[28] Dopamine produced by neurons in the arcuate nucleus is secreted into the hypophyseal portal system of the median eminence, which supplies the pituitary gland.[28] In the absence of dopamine, the cells that produce prolactin secrete it continuously; dopamine inhibits this secretion.[28] In the context of regulating prolactin secretion, dopamine is occasionally called prolactin-inhibiting factor, prolactin-inhibiting hormone, or prolactostatin.[28]

The zona incerta, grouped between the arcuate and periventricular nuclei, projects to several areas of the hypothalamus, and participates in the control of gonadotropin-releasing hormone, which is necessary to activate the development of the male and female reproductive systems, following puberty.[28]

An additional group of dopamine-secreting neurons is found in the retina of the eye.[29] These neurons are amacrine cells, meaning that they have no axons.[29] They release dopamine into the extracellular medium, and are specifically active during daylight hours, becoming silent at night.[29] This retinal dopamine acts to enhance the activity of cone cells in the retina while suppressing rod cells—the result is to increase sensitivity to color and contrast during bright light conditions, at the cost of reduced sensitivity when the light is dim.[29]

Basal ganglia

Main circuits of the basal ganglia. The dopaminergic pathway from the substantia nigra pars compacta to the striatum is shown in light blue.

The largest and most important sources of dopamine in the vertebrate brain are the substantia nigra and ventral tegmental area.[23] These structures are closely related to each other and functionally similar in many respects.[23] Both are components of the basal ganglia, a complex network of structures located mainly at the base of the forebrain.[23] The largest component of the basal ganglia is the striatum.[30] The substantia nigra sends a dopaminergic projection to the dorsal striatum, while the ventral tegmental area sends a similar type of dopaminergic projection to the ventral striatum.[23]

Progress in understanding the functions of the basal ganglia has been slow.[30] The most popular hypotheses, broadly stated, propose that the basal ganglia play a central role in action selection.[31] The action selection theory in its simplest form proposes that when a person or animal is in a situation where several behaviors are possible, activity in the basal ganglia determines which of them is executed, by releasing that response from inhibition while continuing to inhibit other motor systems that if activated would generate competing behaviors.[32] Thus the basal ganglia, in this concept, are responsible for initiating behaviors, but not for determining the details of how they are carried out. In other words, they essentially form a decision-making system.[32]

The basal ganglia can be divided into several sectors, and each is involved in controlling particular types of actions.[33] The ventral sector of the basal ganglia (containing the ventral striatum and ventral tegmental area) operates at the highest level of the hierarchy, selecting actions at the whole-organism level.[32] The dorsal sectors (containing the dorsal striatum and substantia nigra) operate at lower levels, selecting the specific muscles and movements that are used to implement a given behavior pattern.[33]

Dopamine contributes to the action selection process in at least two important ways. First, it sets the “threshold” for initiating actions.[31] The higher the level of dopamine activity, the lower the impetus required to evoke a given behavior.[31] As a consequence, high levels of dopamine lead to high levels of motor activity and impulsive behavior; low levels of dopamine lead to torpor and slowed reactions.[31] Parkinson’s disease, in which dopamine levels in the substantia nigra circuit are greatly reduced, is characterized by stiffness and difficulty initiating movement—however, when people with the disease are confronted with strong stimuli such as a serious threat, their reactions can be as vigorous as those of a healthy person.[34] In the opposite direction, drugs that increase dopamine release, such as cocaine or amphetamine, can produce heightened levels of activity, including at the extreme, psychomotor agitation and stereotyped movements.[35]

The second important effect of dopamine is as a “teaching” signal.[31] When an action is followed by an increase in dopamine activity, the basal ganglia circuit is altered in a way that makes the same response easier to evoke when similar situations arise in the future.[31] This is a form of operant conditioning, in which dopamine plays the role of a reward signal.[32]

Reward

Illustration of dopaminergic reward structures

In the reward system, reward is the attractive and motivational property of a stimulus that induces appetitive behavior (also known as approach behavior) and consummatory behavior.[36] A rewarding stimulus is one that has the potential to cause an approach to it and a choice to be made to consume it or not.[36] Pleasure, learning (e.g., classical and operant conditioning), and approach behavior are the three main functions of reward.[36] As an aspect of reward, pleasure provides a definition of reward;[36] however, while all pleasurable stimuli are rewarding, not all rewarding stimuli are pleasurable (e.g., extrinsic rewards like money).[36][37] The motivational or desirable aspect of rewarding stimuli is reflected by the approach behavior that they induce, whereas the pleasurable component of intrinsic rewards is derived from the consummatory behavior that ensues upon acquiring them.[36] A neuropsychological model which distinguishes these two components of an intrinsically rewarding stimulus is the incentive salience model, where “wanting” or desire (less commonly, “seeking”[38]) corresponds to appetitive or approach behavior while “liking” or pleasure corresponds to consummatory behavior.[36][39][40] In human drug addicts, “wanting” becomes dissociated from “liking” as the desire to use an addictive drug increases, while the pleasure obtained from consuming it decreases due to drug tolerance.[39]

Within the brain, dopamine functions partly as a “global reward signal”, where an initial phasic dopamine response to a rewarding stimulus encodes information about the salience, value, and context of a reward.[36] In the context of reward-related learning, dopamine also functions as a reward prediction error signal, that is, the degree to which the value of a reward is unexpected.[36] According to this hypothesis of Wolfram Schultz, rewards that are expected do not produce a second phasic dopamine response in certain dopaminergic cells, but rewards that are unexpected, or greater than expected, produce a short-lasting increase in synaptic dopamine, whereas the omission of an expected reward actually causes dopamine release to drop below its background level.[36] The “prediction error” hypothesis has drawn particular interest from computational neuroscientists, because an influential computational-learning method known as temporal difference learning makes heavy use of a signal that encodes prediction error.[36] This confluence of theory and data has led to a fertile interaction between neuroscientists and computer scientists interested in machine learning.[36]
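The temporal-difference idea is easy to sketch in code. In the toy model below (entirely illustrative, not from the article), a single predicted value is updated by its prediction error; as the same reward repeats, the error that plays the role of the phasic dopamine signal decays toward zero:

```python
# Toy temporal-difference (TD) update: the prediction error `delta`
# plays the role attributed above to phasic dopamine. Illustrative only.

def learn_value(rewards, alpha=0.1):
    """Track the predicted value of a cue repeatedly followed by reward."""
    value = 0.0            # current prediction of the upcoming reward
    errors = []            # prediction error on each trial
    for r in rewards:
        delta = r - value  # reward prediction error: actual minus expected
        value += alpha * delta
        errors.append(delta)
    return value, errors

value, errors = learn_value([1.0] * 50)
# First trial: the reward is fully unexpected, so the error is large (1.0).
# By trial 50 the reward is expected and the error is near zero, matching
# the "no phasic response to expected rewards" pattern described above.
# Omitting the reward at that point (r = 0) would yield a negative error,
# the below-baseline dip seen when an expected reward is withheld.
```
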

Evidence from microelectrode recordings from the brains of animals shows that dopamine neurons in the ventral tegmental area (VTA) and substantia nigra are strongly activated by a wide variety of rewarding events.[36] These reward-responsive dopamine neurons in the VTA and substantia nigra are crucial for reward-related cognition and serve as the central component of the reward system.[39][41][42] The function of dopamine varies in each axonal projection from the VTA and substantia nigra;[39] for example, the VTA–nucleus accumbens shell projection assigns incentive salience (“want”) to rewarding stimuli and their associated cues, the VTA–orbitofrontal cortex projection updates the value of different goals in accordance with their incentive salience, the VTA–amygdala and VTA–hippocampus projections mediate the consolidation of reward-related memories, and both the VTA–nucleus accumbens core and substantia nigra–dorsal striatum pathways are involved in learning motor responses that facilitate the acquisition of rewarding stimuli.[39][43] Some activity within the VTA dopaminergic projections appears to be associated with reward prediction as well.[39][43]

While dopamine has a central role in mediating “wanting” (the appetitive, approach-oriented behavioral response to rewarding stimuli), detailed studies have shown that dopamine cannot simply be equated with “liking” or pleasure, as reflected in the consummatory behavioral response.[37] Dopamine neurotransmission is involved in some but not all aspects of pleasure-related cognition, since pleasure centers have been identified both within and outside the dopamine system (i.e., compartments within the nucleus accumbens shell and the ventral pallidum, respectively).[37][40] For example, direct electrical stimulation of dopamine pathways, using electrodes implanted in the brain, is experienced as pleasurable, and many types of animals are willing to work to obtain it.[44] Antipsychotic drugs used to treat psychosis reduce dopamine levels and tend to cause anhedonia, a diminished ability to experience pleasure.[45] Many types of pleasurable experiences, such as sex, enjoying food, or playing video games, increase dopamine release.[46] All addictive drugs directly or indirectly affect dopamine neurotransmission in the nucleus accumbens;[39][44] when repeatedly taken in high doses, these drugs increase drug “wanting”, leading to compulsive drug use, presumably through the sensitization of incentive salience.[40] Drugs that increase dopamine release include stimulants such as methamphetamine or cocaine; these produce increases in “wanting” behaviors but do not greatly alter expressions of pleasure or change levels of satiation.[40][44] Opiate drugs such as heroin or morphine, by contrast, produce increases in expressions of both “liking” and “wanting” behaviors.[40] Moreover, animals in which the ventral tegmental dopamine system has been rendered inactive do not seek food and will starve to death if left to themselves, yet if food is placed in their mouths they will consume it and show expressions indicative of pleasure.[47]


Need for Cognition

The need for cognition (NFC), in psychology, is a personality variable reflecting the extent to which individuals are inclined towards effortful cognitive activities.[1][2]

Need for cognition has been variously defined as “a need to structure relevant situations in meaningful, integrated ways” and “a need to understand and make reasonable the experiential world”.[3] Higher NFC is associated with increased appreciation of debate, idea evaluation, and problem solving. Those with a high need for cognition may be inclined towards high elaboration. Those with a lower need for cognition may display opposite tendencies, and may process information more heuristically, often through low elaboration.[4]

Need for cognition is closely related to the five-factor model domain openness to ideas, typical intellectual engagement, and epistemic curiosity (see below). Need for cognition has also been found to correlate with higher self-esteem, masculine sex-role orientation, and psychological absorption, while being inversely related to social anxiety.

The 18 statements from the revised Need for Cognition Scale (Cacioppo et al., 1984) used in the Wabash National Study of Liberal Arts Education are shown below. Asterisks designate the items that are reverse scored.

  1. I would prefer complex to simple problems.
  2. I like to have the responsibility of handling a situation that requires a lot of thinking.
  3. Thinking is not my idea of fun.*
  4. I would rather do something that requires little thought than something that is sure to challenge my thinking abilities.*
  5. I try to anticipate and avoid situations where there is likely a chance I will have to think in depth about something.*
  6. I find satisfaction in deliberating hard and for long hours.
  7. I only think as hard as I have to.*
  8. I prefer to think about small, daily projects to long-term ones.*
  9. I like tasks that require little thought once I’ve learned them.*
  10. The idea of relying on thought to make my way to the top appeals to me.
  11. I really enjoy a task that involves coming up with new solutions to problems.
  12. Learning new ways to think doesn’t excite me very much.*
  13. I prefer my life to be filled with puzzles that I must solve.
  14. The notion of thinking abstractly is appealing to me.
  15. I would prefer a task that is intellectual, difficult, and important to one that is somewhat important but does not require much thought.
  16. I feel relief rather than satisfaction after completing a task that required a lot of mental effort.*
  17. It’s enough for me that something gets the job done; I don’t care how or why it works.*
  18. I usually end up deliberating about issues even when they do not affect me personally.

The Universal Principles of Persuasion

Published on Nov 26, 2012

For more visit our blog at http://www.insideinfluence.com

Animation describing the Universal Principles of Persuasion based on the research of Dr. Robert Cialdini, Professor Emeritus of Psychology and Marketing, Arizona State University.

Dr. Robert Cialdini & Steve Martin are co-authors (together with Dr. Noah Goldstein) of the New York Times, Wall Street Journal and Business Week International Bestseller Yes! 50 Scientifically Proven Ways to be Persuasive.

US Amazon http://tinyurl.com/afbam9g