robertsapolskyinsights
Stuff and notes I find from Robert Sapolsky's lectures
66 posts
robertsapolskyinsights · 10 years ago
Photo
The 12 Cognitive Biases That Prevent You From Being Rational
The human brain is capable of 10^16 processes per second, which makes it far more powerful than any computer currently in existence. But that doesn’t mean our brains don’t have major limitations. The lowly calculator can do math thousands of times better than we can, and our memories are often less than useless — plus, we’re subject to cognitive biases, those annoying glitches in our thinking that cause us to make questionable decisions and reach erroneous conclusions. Here are a dozen of the most common and pernicious cognitive biases that you need to know about.
Before we start, it’s important to distinguish between cognitive biases and logical fallacies. A logical fallacy is an error in logical argumentation (e.g. ad hominem attacks, slippery slopes, circular arguments, appeal to force, etc.). A cognitive bias, on the other hand, is a genuine deficiency or limitation in our thinking — a flaw in judgment that arises from errors of memory, social attribution, and miscalculations (such as statistical errors or a false sense of probability).
Some social psychologists believe our cognitive biases help us process information more efficiently, especially in dangerous situations. Still, they lead us to make grave mistakes. We may be prone to such errors in judgment, but at least we can be aware of them. Here are some important ones to keep in mind.
Confirmation Bias We love to agree with people who agree with us. It’s why we only visit websites that express our political opinions, and why we mostly hang around people who hold similar views and tastes. We tend to be put off by individuals, groups, and news sources that make us feel uncomfortable or insecure about our views — a discomfort the social psychologist Leon Festinger called cognitive dissonance. It’s this preferential mode of behavior that leads to the confirmation bias — the often unconscious act of referencing only those perspectives that fuel our pre-existing views, while at the same time ignoring or dismissing opinions — no matter how valid — that threaten our world view. And paradoxically, the internet has only made this tendency even worse.
Ingroup Bias Somewhat similar to the confirmation bias is the ingroup bias, a manifestation of our innate tribalistic tendencies. And strangely, much of this effect may have to do with oxytocin — the so-called “love molecule.” This neurotransmitter, while helping us to forge tighter bonds with people in our ingroup, performs the exact opposite function for those on the outside — it makes us suspicious, fearful, and even disdainful of others. Ultimately, the ingroup bias causes us to overestimate the abilities and value of our immediate group at the expense of people we don’t really know.
Gambler’s Fallacy It’s called a fallacy, but it’s really a glitch in our thinking. We tend to put a tremendous amount of weight on previous events, believing that they’ll somehow influence future outcomes. The classic example is coin-tossing. After flipping heads, say, five consecutive times, our inclination is to predict an increase in the likelihood that the next coin toss will be tails — that the odds must surely now favor tails. But in reality, the odds are still 50/50. As statisticians say, the outcomes of different tosses are statistically independent, and the probability of any outcome is still 50%.
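For the statistically inclined, here is a quick way to check that claim yourself: a minimal Monte Carlo sketch in Python (not part of the original article; the variable names and trial count are arbitrary choices).

    import random

    random.seed(42)   # reproducible runs
    streaks = 0       # sequences that opened with five straight heads
    tails_next = 0    # how often the sixth toss came up tails in those sequences

    for _ in range(2_000_000):
        flips = [random.random() < 0.5 for _ in range(6)]  # True = heads
        if all(flips[:5]):        # first five tosses were all heads
            streaks += 1
            if not flips[5]:      # sixth toss was tails
                tails_next += 1

    print(f"P(tails | five heads) = {tails_next / streaks:.3f}")  # prints ~0.500

However long the run of heads, the estimated probability of tails on the next toss stays at roughly 0.5, which is exactly what statistical independence means.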
Relatedly, there’s also the positive expectation bias — which often fuels gambling addictions. It’s the sense that our luck has to eventually change and that good fortune is on the way. It also contributes to the “hot hand” misconception. And it’s the same feeling we get when we start a new relationship and believe it will be better than the last one.
Post-Purchase Rationalization Remember that time you bought something totally unnecessary, faulty, or overly expensive, and then rationalized the purchase to such an extent that you convinced yourself it was a great idea all along? Yeah, that’s post-purchase rationalization in action — a kind of built-in mechanism that makes us feel better after we make crappy decisions, especially at the cash register. Also known as Buyer’s Stockholm Syndrome, it’s a way of subconsciously justifying our purchases — especially expensive ones. Social psychologists say it stems from the principle of commitment, our psychological desire to stay consistent and avoid a state of cognitive dissonance.
Neglecting Probability Very few of us have a problem getting into a car and going for a drive, but many of us experience great trepidation about stepping inside an airplane and flying at 35,000 feet. Flying, quite obviously, is a wholly unnatural and seemingly hazardous activity. Virtually all of us know and acknowledge that the probability of dying in an auto accident is significantly greater than that of getting killed in a plane crash (statistically, we have a 1 in 84 chance of dying in a vehicular accident, as compared to a 1 in 5,000 chance of dying in a plane crash [other sources indicate odds as high as 1 in 20,000]) — yet our brains refuse to absorb this crystal-clear logic. It’s the same phenomenon that makes us worry about getting killed in an act of terrorism as opposed to something far more probable, like falling down the stairs or accidental poisoning.
This is what the legal scholar Cass Sunstein calls probability neglect — our inability to grasp a proper sense of peril and risk — which often leads us to overstate the risks of relatively harmless activities while underrating more dangerous ones.
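As a rough back-of-the-envelope sketch (using only the odds quoted above, which are illustrative figures rather than authoritative statistics), the size of the mismatch is easy to compute:

    # Lifetime odds quoted in the article above (rounded, illustrative figures).
    p_car = 1 / 84        # dying in a vehicular accident
    p_plane = 1 / 5_000   # dying in a plane crash (other sources: 1 in 20,000)

    print(f"car vs. plane relative risk: {p_car / p_plane:.0f}x")   # -> 60x
    print(f"with 1-in-20,000 plane odds: {p_car * 20_000:.0f}x")    # -> 238x

Probability neglect is precisely the failure to let a factor-of-60 (or greater) difference like that calibrate our sense of danger.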
Observational Selection Bias This is the effect of suddenly noticing things we didn’t notice much before — and wrongly assuming that their frequency has increased. A perfect example is what happens after we buy a new car and inexplicably start to see the same car virtually everywhere. A similar effect happens to pregnant women, who suddenly notice a lot of other pregnant women around them. Or it could be a unique number or song. It’s not that these things are appearing more frequently; it’s that we’ve (for whatever reason) selected the item in our mind and, in turn, are noticing it more often. Trouble is, most people don’t recognize this as a selection bias and actually believe these items or events are happening with increased frequency — which can be a very disconcerting feeling. It’s also a cognitive bias that contributes to the feeling that the appearance of certain things or events couldn’t possibly be a coincidence (even though it is).
Status-Quo Bias We humans tend to be apprehensive of change, which often leads us to make choices that guarantee things remain the same, or change as little as possible. Needless to say, this has ramifications in everything from politics to economics. We like to stick to our routines, political parties, and our favorite meals at restaurants. Part of the perniciousness of this bias is the unwarranted assumption that another choice would be inferior or make things worse. The status-quo bias can be summed up with the saying, “If it ain’t broke, don’t fix it” — an adage that fuels our conservative tendencies. And in fact, some commentators say this is why the U.S. hasn’t been able to enact universal health care, despite the fact that most individuals support the idea of reform.
Negativity Bias People tend to pay more attention to bad news — and it’s not just because we’re morbid. Social scientists theorize that it’s on account of our selective attention and that, given the choice, we perceive negative news as being more important or profound. We also tend to give more credibility to bad news, perhaps because we’re suspicious of (or bored by) proclamations to the contrary. In evolutionary terms, heeding bad news may be more adaptive than ignoring good news (e.g. “saber-toothed tigers suck” vs. “this berry tastes good”). Today, we run the risk of dwelling on negativity at the expense of genuinely good news. Steven Pinker, in his book The Better Angels of Our Nature: Why Violence Has Declined, argues that crime, violence, war, and other injustices are steadily declining, yet most people would argue that things are getting worse — a perfect example of the negativity bias at work.
Bandwagon Effect Though we’re often unconscious of it, we love to go with the flow of the crowd. When the masses start to pick a winner or a favorite, that’s when our individualized brains start to shut down and enter into a kind of “groupthink” or hivemind mentality. But it doesn’t have to be a large crowd or the whims of an entire nation; it can include small groups, like a family or even a small group of office co-workers. The bandwagon effect is what often causes behaviors, social norms, and memes to propagate among groups of individuals — regardless of the evidence or motives in support. This is why opinion polls are often maligned, as they can steer the perspectives of individuals accordingly. Much of this bias has to do with our built-in desire to fit in and conform, as famously demonstrated by the Asch Conformity Experiments.
Projection Bias As individuals trapped inside our own minds 24/7, it’s often difficult for us to project outside the bounds of our own consciousness and preferences. We tend to assume that most people think just like us — though there may be no justification for it. This cognitive shortcoming often leads to a related effect known as the false consensus bias, where we tend to believe that people not only think like us, but that they also agree with us. It’s a bias where we overestimate how typical and normal we are, and assume that a consensus exists on matters when there may be none. It can also create the effect where members of a radical or fringe group assume that more people on the outside agree with them than is actually the case. And it explains the exaggerated confidence we have when predicting the winner of an election or sports match.
The Current Moment Bias We humans have a really hard time imagining ourselves in the future and altering our behaviors and expectations accordingly. Most of us would rather experience pleasure in the current moment, while leaving the pain for later. This is a bias that is of particular concern to economists (i.e. our tendency to overspend rather than save) and health practitioners. Indeed, a 1998 study showed that, when making food choices for the coming week, 74% of participants chose fruit. But when the food choice was for the current day, 70% chose chocolate.
Anchoring Effect Also known as the relativity trap, this is the tendency we have to compare and contrast only a limited set of items. It’s called the anchoring effect because we tend to fixate on a value or number that in turn gets compared to everything else. The classic example is an item at the store that’s on sale; we tend to see (and value) the difference in price, but not the overall price itself. This is why some restaurant menus feature very expensive entrees, while also including more (apparently) reasonably priced ones. It’s also why, when given a choice, we tend to pick the middle option — not too expensive, and not too cheap.
Source: io9
3K notes · View notes
robertsapolskyinsights · 10 years ago
Quote
The challenge for us all is to see life as a verb, not a noun. We cannot hold on to the fluid river of life, guarantee the certainty of facts, the universality of rules. We need to not only tolerate ambiguity, but learn to treasure its secrets.
Reflections on The Mindful Brain (Daniel J. Siegel, M.D.)
32 notes · View notes
robertsapolskyinsights · 10 years ago
Quote
You are your synapses
Joseph LeDoux (in Synaptic Self)
278 notes · View notes
robertsapolskyinsights · 10 years ago
Quote
Biology gives you a brain. Life turns it into a mind.
Jeffrey Eugenides (via neuromorphogenesis)
1K notes · View notes
robertsapolskyinsights · 10 years ago
Link
Guest Post by Wil Forbis
In 1994, neuroscientist Antonio Damasio published Descartes’ Error: Emotion, Reason, and the Human Brain. The book was a conversational rumination on neuroscience; at its core was Damasio’s assertion that human emotion is a sensory experience. It is felt in the...
63 notes · View notes
robertsapolskyinsights · 10 years ago
Text
on suicides and depression
Worldwide, up to one million people die by suicide every year, according to the World Health Organization. In the last 45 years, suicide rates have increased by 60%, says the WHO, and suicide is among the three leading causes of death among people aged 15 to 44. In a study published in 2011, an international team of health researchers said India had the worst rate of severe depression of 18 countries surveyed, with 36% of respondents showing at least five symptoms, such as loss of appetite and a sense of worthlessness, for a period of two weeks or more...
7 notes · View notes
robertsapolskyinsights · 10 years ago
Text
Depression, antidepressants, and the shrinking hippocampus by Robert M. Sapolsky*
See the article "Stress-induced changes in cerebral metabolites, hippocampal volume, and cell proliferation are prevented by antidepressant treatment with tianeptine" on page 12796.
Throughout human history, it has been apparent that few medical maladies are as devastating in their effects as major depression. And since the 1950s, with the advent of the first generation of antidepressants, it has been apparent that depression is a biological disorder. This has generated the tremendous intellectual challenge of how to understand the material, reductive bases of a disease of malignant sadness.
Both the tragic components and the intellectual challenge of depression have deepened in the last decade with a series of high-visibility reports that indicate prolonged, major depression is associated with atrophy within the central nervous system. A report in this issue of PNAS by Czéh et al. (1) adds support to a possible route for reversing these morphological changes.
Such atrophy is centered in a brain region called the hippocampus. This structure plays a critical role in learning and memory, and the magnitude of the hippocampal volume loss (nearly 20% in some reports; refs. 2–4) helps explain some well-documented cognitive deficits that accompany major depression. These were careful and well-controlled studies, in that the atrophy was demonstrable after controlling for total cerebral volume and could be dissociated from variables such as history of antidepressant treatment, electroconvulsive therapy, or alcohol use. Moreover, more prolonged depressions were associated with more severe atrophy.
These findings of hippocampal atrophy raise immediate questions. First, is it permanent? Tentatively, this appears to be the case, as the atrophy persisted for up to decades after the depressions were in remission. In addition, the extent of atrophy did not lessen with increasing duration of remission (2–4).
Next, does the hippocampal atrophy arise as a result of depression, or does it precede and even predispose toward depression? There is little evidence for the latter (discussed in ref. 5), and most in the field tacitly assume that this morphological change is a consequence of the biology underlying the affective (mood) aspects of the disease.
More challenging, what are the cellular bases of the persistent atrophy? Some plausible candidate mechanisms exist, all built around the numerous ways in which major depression is, ultimately, a stress-related disorder. Sustained stress has three relevant adverse effects on hippocampal morphology. First, it can cause retraction of dendritic processes in hippocampal neurons (reviewed in ref. 6). Although this could cause atrophy of total hippocampal volume secondary to loss of neuropil volume, it is unlikely to be relevant here, in that the retraction readily reverses with the abatement of stress. A second adverse effect of stress is the inhibition of neurogenesis in the adult hippocampus (reviewed in ref. 7). Finally, in some, but not all, studies, sustained stress can cause loss of preexisting hippocampal neurons (i.e., neurotoxicity) (reviewed in ref. 8). Stress-induced inhibition of neurogenesis, neurotoxicity, or both could be relevant to the hippocampal atrophy. A number of heroically obsessive studies have reported the results of postmortem cell counts in frontal cortical regions of the brains of depressives, indicating cell loss (9, 10); similar studies must be done in the hippocampus to determine which cellular mechanism(s) underlies the volume loss.
An even more challenging question is what is the proximal cause of the volume loss. A usual suspect is the class of hormones called glucocorticoids (with the human version being cortisol). These steroids are secreted by the adrenal gland in response to stress, and decades of work have shown them to have a variety of adverse effects in the brain, centered in the hippocampus (which contains considerable quantities of receptors for glucocorticoids). The effects include retraction of dendritic processes, inhibition of neurogenesis, and neurotoxicity (reviewed in ref. 8). Moreover, hippocampal volume loss occurs in Cushing's syndrome (in which there is hypersecretion of cortisol, secondary to a tumor) (11). In addition, about half of individuals with major depression hypersecrete cortisol. Finally, the individuals in these studies demonstrating hippocampal atrophy were most likely to have suffered from the subtype of depression with the highest rates of hypercortisolism (2, 3). Thus, considerable correlative evidence implicates glucocorticoids. Nonetheless, no study has yet demonstrated that such atrophy only occurs, or even is more likely to occur, among depressives who are hypercortisolemic.
With these various pieces emerging in recent years, another reasonable question is whether anything can be done about the atrophy, and this is where the exciting findings of Czéh et al. (1) come in. A number of studies using rodents indicate that some of the standard treatments for depression, namely administration of antidepressant drugs or the use of electroconvulsive therapy, have effects on the hippocampus that should counter those reported in major depression. For example, one class of antidepressant drugs prevents stress-induced retraction of dendritic processes (12, 13). In addition, both antidepressant drugs and electroconvulsive therapy increase adult neurogenesis in the hippocampus (14, 15). The work of Czéh et al. represents an important extension of these findings in two ways. First, they now report similar effects of an antidepressant drug in the primate hippocampus. And critically, this is the first such demonstration with an animal model of depression, rather than in “undepressed” subjects.
The study involved tree shrews, a prosimian primate that the authors have long used in a model of depression induced by psychosocial conflict and social subordinance (16). Subjects underwent 5 weeks of such stress, with treatment during the last four with vehicle or the antidepressant tianeptine. Thus, in a way that is obviously artificial, the time course of stress and antidepressant treatment roughly models what a depressed and medicated human might experience.
The authors first demonstrated that in animals not treated with tianeptine, psychosocial stress induced some neurobiological and physiological alterations reminiscent of those seen in human depressives. Basal cortisol levels increased ≈50%. Proton magnetic resonance spectroscopy of the cerebrum indicated 13–15% decreases in measures of neuronal viability and function (the neuroaxonal marker N-acetyl-aspartate), cerebral metabolism (creatine and phosphocreatine), and membrane turnover (choline-containing compounds). In contrast, there was no change in a glial marker of viability (myo-inositol). Furthermore, psychosocial stress caused a roughly 30% decrease in proliferation of new cells in the hippocampus. Finally, such stress was associated with a nonsignificant trend toward a decrease in total hippocampal volume.
Then, to complete the story, the authors showed that tianeptine prevented many of these stress-induced changes. These included the spectroscopic alterations, the inhibition of cell proliferation, and a significant increase in hippocampal volume (as compared with stress + vehicle animals). Of significance (see below), tianeptine did not prevent the stress-induced rise in cortisol levels.
Overall, these are impressive and important findings. Czéh et al. have shown that a primate model of stress-induced “depression” induces signs of decreased neuronal metabolism and function, as well as decreased cell proliferation. Moreover, the fact that there was only a trend toward decreased hippocampal volume is readily explained as reflecting the relatively short duration of the stressor; human studies suggest that hippocampal atrophy is demonstrable only after major depression on the scale of years. Finally, the authors show that antidepressant treatment prevents these neurobiological alterations.
Naturally, these findings raise some questions, and a number of pieces of this puzzle do not yet fit in place.
At first glance, one exciting implication of this study is the suggestion that the hippocampal volume loss in prolonged depression arises from inhibition of hippocampal cell proliferation, and that antidepressant treatment normalizes the former by preventing the latter. However, the careful data of Czéh et al. argue against this idea, at least in their model. Neurogenesis in the adult hippocampus is restricted to the subgranular zone, and newborn neurons appear to migrate only as far as the nearby dentate granule layer. For hippocampal neuroanatomy neophytes, this means that the revolution in adult neurogenesis occurs entirely in a fairly small subsection of the hippocampus; there has been some debate over just how much adult neurogenesis occurs and how much turnover there is in adult dentate gyrus neurons (17). Thus, if changes in overall hippocampal volume are secondary to changes in cell proliferation, one would predict that (i) psychosocial stress would lead to a marked reduction in the volume of the dentate granule layer, and (ii) this would be prevented by tianeptine. Instead, neither was observed.
It is not immediately obvious how much these findings generalize to other antidepressants. The vast majority of antidepressants in clinical use work by increasing the synaptic availability of monoamine neurotransmitters. Although the best known of these are the selective serotonin reuptake inhibitors such as Prozac, other efficacious drugs also block the reuptake of norepinephrine and/or dopamine. Nicely commensurate with the involvement of serotonin, there is some evidence that increased serotonin availability can stimulate cell proliferation in the hippocampus (18, 19). However, tianeptine is a distinctly atypical antidepressant (with, reputedly, only limited clinical efficacy), which increases serotonin reuptake. Thus, it decreases synaptic serotonin concentrations, rather than enhancing them.
Embedded in the human clinical studies is more evidence that these findings may not automatically extend to other antidepressants. In the broadest statement of what the current study suggests, administration of antidepressants not only can cure the affective symptoms of depression, but also can reverse some disquieting neurobiological correlates of depression as well. However, it should be recalled that the original studies linking depression with hippocampal atrophy did not demonstrate such atrophy in depressed individuals. Instead, they demonstrated the link in individuals years or decades into remission from depression, with such remissions arising, in most cases, from the therapeutic efficacy of antidepressant drugs (2–4). Tianeptine was introduced only recently and currently is used only in Europe. Thus, the human literature (in which all studies were from American-based groups) suggests that hippocampal atrophy can still occur in depression (and persist despite depression remission) in individuals treated with the older, more traditional antidepressants.
A final set of questions swirls around the complex issue of causal links among the correlates uncovered. Which factors contribute to and which are consequences of depression? A number of scenarios can be constructed. In the first (Fig. 1A), an array of interacting factors involving stress and a biological vulnerability gives rise to a depression and its associated affective symptoms (arrow 1). Hypercortisolism occurs in approximately half of subjects. An extensive literature demonstrates that such hypercortisolism can be both a response to the stressors preceding depression (arrow 2) and to depression itself (arrow 3), and can, in turn, contribute to the affective symptomology (arrow 4) (20). In this model, these symptoms give rise to the hippocampal abnormalities (arrow 5), which then contribute to the cognitive deficits of sustained depression (arrow 6).
Figure 1
Schematic representations of three different models relating the affective and cognitive symptoms of depression with the morphological and functional changes in the hippocampus. See text for fuller explanation.
In a second, related scenario (Fig. 1B), the affective symptoms and hypercortisolism arise for the same reasons as in Fig. 1A. In this model, the hypercortisolism is directly responsible for the structural and functional alterations in the hippocampus (Fig. 1B, arrow 5).
Most in the field, I suspect, would subscribe to some version of Fig. 1 A or B. Some investigators, however, have posited a very different model (cf. ref. 21; Fig. 1C), one in which there is impaired hippocampal neurogenesis as a starting point (reflecting some sort of developmental abnormality). In this model, such blunted neurogenesis precedes and predisposes toward depression and its affective and cognitive symptoms (Fig. 1C, arrow 1), and the loss of overall hippocampal volume is a direct consequence of the impaired neurogenesis (Fig. 1C, arrow 2). In variants on this model, the hypercortisolism may or may not precede the impaired neurogenesis, and may or may not directly contribute to it. Most in the field appear to be skeptical about this model, in part, because there is little biological rationale connecting the rate of neurogenesis in the hippocampus with affective states such as grief, helplessness, and anhedonia. Moreover, there is a problem with specificity: whereas antidepressants (in addition to often curing the affective symptoms of depression) increase rates of neurogenesis, the drug lithium (in addition to often curing the symptoms of mania) increases rates of neurogenesis (22).
What do the findings of Czéh et al. suggest about these models? Given the obvious caveat that psychosocial stress in tree shrews cannot be identical to a major human depression, they suggest a number of things. Their data fit well with Fig. 1A. The specific findings do not allow one to distinguish between tianeptine preventing the hippocampal alterations by blocking the link between stress and affective depression (i.e., Fig. 1A, arrow 1), or by preventing the link between the affective symptoms and the hippocampus (Fig. 1A, arrow 5). Although there is next to nothing known about the biology of what might create arrow 5 in Fig. 1A, arrow 1 is well understood and constitutes the primary point where antidepressants are traditionally thought to exert their action.
The data of Czéh et al. also offer some limited support for Fig. 1B. The “depressed” animals in their study demonstrated elevated cortisol levels. However, as noted, tianeptine treatment did not block such hypercortisolism. Thus, if the cortisol excess does indeed contribute to the hippocampal changes (the premise of Fig. 1B), tianeptine must be blocking the effects of cortisol (i.e., Fig. 1B, arrow 5). Of note, a variety of more traditional antidepressants have been shown to decrease cortisol levels (cf. refs. 23 and 24). It is a matter of debate whether they accomplish this by blunting arrow 2 and/or arrow 3 in Fig. 1B. There also has been the speculation that antidepressants decrease the affective symptoms of depression by blocking arrow 2, and thus arrow 4, in Fig. 1B (25).
Finally, the data of Czéh and colleagues are not compatible with Fig. 1C. Most obviously, they demonstrate that in a randomly selected population of subjects, psychosocial stress, with depressive-like symptoms as an intermediary factor, can impair hippocampal neurogenesis, a relationship that is opposite to the flow of arrows in Fig. 1C. Potentially, a limited version of that model might hold in explaining their data. This would be the case if the subset of animals starting off with the lowest basal rate of neurogenesis was most vulnerable to this psychosocial stress model. Current techniques make such a prospective study impossible.
Obviously, more research is needed. It would be a boon to biological psychiatry if any antidepressants can prevent some of the neurobiological correlates of depression, in addition to alleviating the affective symptoms. But findings such as these also support the frequent uphill battle for those who study depression, or suffer from it, namely convincing others that this is a real biological disorder, rather than some sort of failure of fortitude or spirit.
16 notes · View notes
robertsapolskyinsights · 10 years ago
Text
On stress... by Sapolsky
This month, we feature videos of a Greater Good presentation by Robert M. Sapolsky, one of the country’s foremost experts on stress. In this excerpt from his talk, the best-selling author and Stanford University professor explains the difference between bad stress and good stress, and how we can manage the effects of chronic stress on our lives.
In 1900, what do you think were the leading causes of death in this country? If you were 20 to 40 years old and a woman, the single riskiest thing you could do was try to give birth. TB, pneumonia, and influenza killed a lot of other people. But few people under the age of 100 die of the flu anymore. Relatively few women die in childbirth. Instead, we die of these utterly bizarre diseases that have never existed before on the planet in any sort of numbers—diseases like heart disease, cancer, adult-onset diabetes, and Alzheimer’s.
Now, some of this has to do with nuts-and-bolts biology. But some of it has to do with issues that nobody ever had to think about before in medicine—totally bizarre questions like, “What’s your psychological makeup?” or “What’s your social status?” or “How do people with your social status get treated in your society?” And this one: “Why is it that when we’re feeling unloved, we eat more starch?” Figure that out, and you’ve cured half the cases of diabetes in this country.
Indeed, when you look at the diseases that do us in, they are predominantly diseases that can be caused, or made worse, by stress. As a result, most of us in this room will have the profound Westernized luxury of dropping dead someday of a stress-related disease. That’s why it’s so urgent that we understand stress—and how to better manage it.
How stress kills
Do you remember “homeostasis,” a term I guarantee you heard in ninth-grade biology? Homeostasis is having an ideal body temperature, an ideal level of glucose in the bloodstream, an ideal everything. That’s being in homeostatic balance. A stressor is anything in the outside world that knocks you out of homeostatic balance. If you’re some zebra and a lion has ripped your stomach open and your innards are dragging in the dust and you still need to get out of there—well, that counts as being out of homeostatic balance.
So to reestablish that balance, you secrete adrenaline and other hormones. You mobilize energy and you deliver it where it’s needed, you shut off the inessentials like the sex drive and digestion, you enhance immune defenses, and you think more clearly. You’re facing a short-term physical crisis, and the stress response is what you do with your body. For 99 percent of the species on this planet, stress is three minutes of screaming terror in the savannah, after which either it’s over with or you’re over with. That’s all you need to know about the subject if you’re a zebra or a lion.
If you’re a human, though, you’ve got to expand the definition of a stressor in a very critical way. If you’re running from a lion, your blood pressure is 180 over 120. But you’re not suffering from high blood pressure—you’re saving your life. Have this same thing happen when you’re stuck in traffic, and you’re not saving your life. Instead you are suffering from stress-induced hypertension.
We humans turn on the stress response with memories, with emotions, with thoughts, and the whole punch line is: that’s not what it evolved for. Do it regularly enough, and you’re going to damage your cardiovascular system. Increased blood flow hammers on the walls of your blood vessels, causing inflammation. Fat and glucose and cholesterol glom on and begin to clog your arteries. That’s bad news. You are more at risk for chronic fatigue, sleep disruption, muscle atrophy, and, probably most importantly, adult-onset diabetes, this once obscure disease that’s just on the edge of being the number one killer in this country.
Chronic stress also does bad things to the nervous system. Stress kills neurons in the part of the brain called the hippocampus and weakens the cables between neurons, so they can’t talk to each other. This impairs the formation and retrieval of long-term memory. The opposite thing happens in the amygdala, which is where we see fear in a brain scanner. In the hippocampus, stress causes stuff to shrivel up. But stress feeds the amygdala; it actually gets bigger. Chronic stress creates a hyper-reactive, hysterical amygdala, and this tells us tons about what stress has to do with anxiety disorders.
Another domain: the mesolimbic dopamine system. Dopamine is a neurotransmitter that is about reward and pleasure. Cocaine works on the dopamine system; all the euphoriants do. What are the effects of chronic stress on this part of the brain? Those pathways get depleted of dopamine, and this takes away your ability to feel pleasure. So if stress depletes your dopamine, what have you just set yourself up for? Major depression.
What about the frontal cortex? It’s the most human part of the brain; we’ve proportionally got more of it than any other species does. And what does the frontal cortex do? It does gratification postponement, self-discipline, long-term planning, emotional regulation. It’s the last part of the brain to fully mature—that doesn’t happen until you’re 25 years old, which explains a lot about the freshman year of college. This has a very interesting implication: if this is the last part of the brain to fully develop, then by definition it is the part of the brain least constrained by genes and most sculpted by experience. What does chronic stress do to the frontal cortex? Atrophy of neurons, disconnecting circuits. As a result, you make the most idiotic decisions, which are going to haunt you for the rest of your life, and yet you think they’re brilliant at the time. That’s another effect of chronic stress: your judgment goes down the tubes.
How to manage stress
We’ve just gone on a quick tour of all the things that can go wrong from chronic stress. If you study the subject for a living, it’s amazing to you that anybody is still alive, that we haven’t just collapsed into puddles of stress-related disease. Despite that, most of us do decent jobs at coping, and a subset of us is spectacular at coping. And thus from day one, stress researchers have wondered why some bodies and some psyches deal better with stress than others.
In making sense of individual differences, what we’re essentially asking is, “What is it that makes psychological stress stressful?” And a huge, elegant literature by now has shown precisely what the building blocks are.
The literature is built on experiments like this one: you have a lab rat in a cage, and every now and then, you give it a shock. Nothing major, but nonetheless, the rat’s blood pressure goes up and so do stress hormone levels. Up goes the risk of an ulcer. You are giving this rat a stress-related disease.
Now, in the second cage, there’s another rat. Every time the first rat gets a shock, so does the second. Same intensity, same duration; both of their bodies are being thrown out of homeostatic balance to exactly the same extent. But there’s a critical difference: every time the second rat gets a shock, it can go over to the other side of its cage, where there’s another rat that it can bite the crap out of. And you know what? This guy’s not going to get an ulcer, because he has an outlet for his frustrations. He has a hobby.
There are other stress experiments that involve torturing rats, which suggest ways for humans to manage stress. We can give the rat a warning 10 seconds before each shock, and we find it doesn’t get an ulcer. That tells us that you are less vulnerable to a stress-related disease if you get predictive information. Another experiment: if we give the rat a lever to press, and that rat thinks he’s in control of the shocks, that helps—a sense of control decreases the stress response.
Yet another experiment tells us it helps to have friends: if a rat getting shocks has a friend it likes in the cage, and they are able to groom each other, the rat doesn’t get the ulcer. So social affiliation helps control stress.
In short, you are more likely to get a stress response—more likely to subjectively feel stressed, more likely to get a stress-related disease—if you feel like you have no outlets for what’s going on, no control, no predictability, you interpret things as getting worse, and you have nobody’s shoulder to cry on.
Okay, these are very powerful observations. They’re helpful. But please don’t assume that if you get as much control in your life and as much predictive information in your life as possible, you will be protected from stress. To understand why, let me share some of the subtleties of this field.
Look at the rat that got a warning. Timing is everything. He didn’t get an ulcer when he got a 10-second warning. But if the warning light goes on one second before the shock occurs, it has no positive effect whatsoever, because there isn’t time for the rat to adjust anything. Or suppose, instead, the warning light comes two minutes before. That will make the ulcers worse, because the rat is sitting there, ulcerating away, thinking, “Here it comes, here it comes, here it comes.” When it comes to predictive information, there’s only a narrow window where it works.
When does a sense of control work? When you’re dealing with a mild to moderate stressor, because in those circumstances you know how much worse it could have been and can imagine, rightly or wrongly, that you had control over that improvement. But if it’s a major, disastrous stressor, the last thing you want is an inflated sense of control, because that sets you up to think that the disaster is all your fault.
In the case of a major disaster, we tend to minimize people’s sense of control—by saying, for example, “It wouldn’t have mattered if you had gotten him to the doctor a month ago; it wouldn’t have made a difference.” And one of the worst things we do, societally, is attribute more control to victims: “Well, what’s she going to expect if she dresses that way?” or “Well, what are they going to expect if they choose not to assimilate?” In short, a sense of control is protective for mild to moderate stressors, but it’s a disaster for major ones. In that domain, the most humane thing you can do is foster denial and rationalization rather than a sense of responsibility.
When is stress good?
Just as not all stress management techniques work, not all stress is bad. In fact, we love stress. We pay good money for it in a scary movie or on a roller coaster ride. We love stress when it’s the right amount of stress. When is it optimal? When it’s only moderately stressful, at the most. And good stress is transient—it’s not for nothing that you don’t have roller coaster rides going for three weeks! The stress also has to be happening in a context that feels safe overall.
Moderately stressful at most, transient, safe—what does that define? That defines stimulation. That defines what play is. What is play about? It’s when a higher-ranking dog says to a lower-ranking dog: “I am willing to suspend our dominance relations right now and allow all sorts of unpredictable interactions. To show how much I’m doing that, I’m going to give you access to my throat or my genitals, and we’re just having a great time here playing.” In play, you feel safe, and as a result, you are willing to give up some control and predictability. We say, “Surprise me!” That’s good stress.
There’s another lesson we can learn from dogs and other hierarchical mammals, like baboons: social rank can cause stress, especially where rankings are unstable and individuals are jockeying for position. But social rank is not as important as social context. What patterns of social affiliation do you have? How often do you groom, how often does somebody groom you? How often do you sit in contact and play with kids? What’s clear by now is that if you have a choice between being a high-ranking baboon or a socially affiliated one, the latter is definitely the one that is going to lead to a healthier, longer life. That’s the baboon we want to be—not the one with power, but the one with friends, neighbors, and family.
7 notes · View notes
robertsapolskyinsights · 10 years ago
Quote
The spiritual is a particular state of the organism, a delicate combination of certain body configurations and certain mental configurations. Sustaining such states depends on a wealth of thoughts about the condition of the self and the condition of other selves, about past and future, about both concrete and abstract conceptions of nature.
Antonio Damasio, Looking for Spinoza, p 286 (via swarov)
34 notes · View notes
robertsapolskyinsights · 10 years ago
Quote
In the quest to understand human behavior, many have tried to overlook emotion, but to no avail. Behavior and mind, conscious and not, and the brain that generates them, refuse to yield their secrets unless emotion (and the many phenomena that hide under its name) is factored in and given its due.
Antonio Damasio (via synapticuniverse)
34 notes · View notes
robertsapolskyinsights · 10 years ago
Quote
That is the beauty of how emotion has functioned throughout evolution: it allows the possibility of making living beings act smartly without having to think smartly.
Antonio Damasio, Descartes’ Error (via sanguineheart)
55 notes · View notes
robertsapolskyinsights · 10 years ago
Quote
Life is carried out inside a boundary that defines a body. Life and the life urge exist inside a boundary, the selectively permeable wall that separates the internal environment from the external environment. The idea of the organism revolves around the existence of that boundary…If there is no boundary, there is no body, and if there is no body, there is no organism.
Antonio Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness (via tiredshoes)
22 notes · View notes
robertsapolskyinsights · 10 years ago
Quote
When you have an emotion you are recruiting a variety of mechanisms that came in the long history of evolution, long before emotions arose, and those mechanisms all had to do with how an organism manages its life.
Antonio Damasio (via intellectuss)
34 notes · View notes
robertsapolskyinsights · 10 years ago
Link
Antonio Damasio, M.D., is a professor of neuroscience and the director of the Brain and Creativity Institute at the University of Southern California. He is a pioneer in the field of cognitive neuroscience and a highly cited researcher. He has received numerous awards for his contributions to the understanding of emotions, feelings, and decision-making, and he has described his discoveries in several books.
- Walking the halls here at the Brain and Creativity Institute, I see art works from your personal collection, and downstairs there is a theater that is also used as a recording studio. How are you furthering the understanding of the connection between the brain and the arts?
- As you come through the lobby, if you turn right, you go toward a laboratory of electrophysiology and a state-of-the-art 3-D MR brain scanner. If you turn left, you go into a small, state-of-the-art auditorium. Its acoustics were designed by Yasuhisa Toyota, who is responsible for the sound of some of the greatest music halls around the world, from Tokyo to Hamburg, including the Walt Disney Concert Hall here in LA, a landmark collaboration with Frank Gehry. What we wanted when we created this complex is to literally force people to say, “What an odd combination. Why?” So here is the answer. On the one hand, we have the most modern form of inquiring into the brain-making mind, and, on the other, we have the oldest. Because when people were beginning to do theater, music, and recitations of poetry, say, in an arena in Greece, they were in fact inquiring about the human mind in very probing ways. Great culture — philosophy, theater, music — gave us some of the most remarkable first entries into the human mind. We wanted to have these two approaches together to force those who work here, as well as visitors, to see that they’re not that different — that the mission we pursue now is not that different from the mission that Sophocles or Aristotle pursued. We need to bridge the two approaches and keep respecting the achievements of the past. The idea that by just doing neuroscience or advanced cognitive science one can understand everything about the human mind is ridiculous. We need to bring past efforts in the arts and the humanities into the mix and also use the current contributions of artists and philosophers to understand this most complicated process that is the human mind.
92 notes · View notes
robertsapolskyinsights · 10 years ago
Photo
The Gut-Brain Connection
Have you ever had a “gut-wrenching” experience? Do certain situations make you “feel nauseous”? Have you ever felt “butterflies” in your stomach? We use these expressions for a reason. The gastrointestinal tract is sensitive to emotion. Anger, anxiety, sadness, elation — all of these feelings (and others) can trigger symptoms in the gut.
The brain has a direct effect on the stomach. For example, the very thought of eating can release the stomach’s juices before food gets there. This connection goes both ways. A troubled intestine can send signals to the brain, just as a troubled brain can send signals to the gut. Therefore, a person’s stomach or intestinal distress can be the cause or the product of anxiety, stress, or depression. That’s because the brain and the gastrointestinal (GI) system are intimately connected — so intimately that they should be viewed as one system.
This is especially true in cases where a person experiences gastrointestinal upset with no obvious physical cause. For such functional GI disorders, it is difficult to try to heal a distressed gut without considering the role of stress and emotion.
Stress and the functional GI disorders
Given how closely the gut and brain interact, it becomes easier to understand why you might feel nauseated before giving a presentation, or feel intestinal pain during times of stress. That doesn’t mean, however, that functional gastrointestinal illnesses are imagined or “all in your head.” Psychology combines with physical factors to cause pain and other bowel symptoms. Psychosocial factors influence the actual physiology of the gut, as well as symptoms. In other words, stress (or depression or other psychological factors) can affect movement and contractions of the GI tract, cause inflammation, or make you more susceptible to infection.
In addition, research suggests that some people with functional GI disorders perceive pain more acutely than other people do because their brains do not properly regulate pain signals from the GI tract. Stress can make the existing pain seem even worse.
Based on these observations, you might expect that at least some patients with functional GI conditions might improve with therapy to reduce stress or treat anxiety or depression. And sure enough, a review of 13 studies showed that patients who tried psychologically based approaches had greater improvement in their digestive symptoms compared with patients who received conventional medical treatment.
Is stress causing your symptoms?
Are your stomach problems — such as heartburn, abdominal cramps, or loose stools — related to stress? Watch for these other common symptoms of stress and discuss them with your doctor. Together you can come up with strategies to help you deal with the stressors in your life, and also ease your digestive discomforts.
Physical symptoms
Stiff or tense muscles (especially in the neck and shoulders), headaches, sleep problems, shakiness or tremors, recent loss of interest in sex, weight loss or gain, restlessness
Behavioral symptoms
Procrastination, grinding teeth, difficulty completing work assignments, changes in the amount of alcohol or food you consume, taking up smoking or smoking more than usual, increased desire to be with or withdraw from others, rumination (frequent talking or brooding about stressful situations)
Emotional symptoms
Crying, an overwhelming sense of tension or pressure, trouble relaxing, nervousness, quick temper, depression, poor concentration, trouble remembering things, loss of sense of humor, indecisiveness
2K notes · View notes
robertsapolskyinsights · 10 years ago
Text
on Free will by Robert Sapolsky
The Danger Of Inadvertently Praising Zygomatic Arches
I don't think that there is free will. The conclusion first hit me in some sort of primordial ooze of insight when I was about 13 years old, and that conclusion has only become stronger since then. What worries me is that despite the fact that I think this without hesitation, there are times when it is simply too hard to feel as if there is no free will, to believe that, to act accordingly. What really worries me is that it is so hard for virtually anyone to truly act as if there is no free will. And that this can have some pretty bad consequences.
If you're a neuroscientist, you might be able to think there's free will if you spend your time solely thinking about, say, the kinetics of one enzyme in the brain, or the structure of an ion channel, or how some molecule is transported down an axon. But if you devote your time instead to thinking about what the brain, hormones, genes, evolution, childhood, fetal environment, and so on have to do with behavior, as I do, it seems simply impossible to think that there is free will.
The evidence is broad and varied. Raising the levels of testosterone in someone makes him more likely to interpret an emotionally ambiguous face as a threatening one (and perhaps act accordingly). Having a mutation in a particular gene increases the odds that a woman will be sexually disinhibited in middle age. Spending fetal life in a particularly stressful prenatal environment increases the likelihood of overeating as an adult. Transiently inactivate a region of someone's frontal cortex, and she acts more cold-hearted and utilitarian when making decisions in an economics game. Being a psychiatrically healthy first-degree relative of a schizophrenic increases the odds of believing in "metamagical" things like UFOs, extrasensory perception, or literalist interpretations of the Bible. Having a normal variant of the gene for the vasopressin receptor makes a guy more likely to have stable romantic relationships. The list goes on and on (and, just to make a point that should be obvious from this paragraph but still can't be emphasized too frequently, lack of free will doesn't remotely equal anything about genetic determinism).
The free will concept requires one to subscribe to the idea that despite there being a swirl of biological yuck and squishy brain parts filled with genes and hormones and neurotransmitters, nonetheless, there's an underground bunker in a secluded corner of the brain, a command center containing a little homunculus who chooses your behavior. In that view, the homunculus might be made of nanochips or ancient, dusty vacuum tubes, of old crinkly parchment, stalactites of your mother's admonishing voice, of streaks of brimstone, or rivets made out of gumption. And, in this view of behavior, whatever the homunculus is made of, it ain't made of something biological. But there is no homunculus and no free will.
This is the only conclusion that I can reach. But still, it is so hard to really believe that, to feel that. I am willing to admit that I have acted egregiously at times as a result of that limitation. My wife and I get together for brunch with a friend who serves some fruit salad. We proclaim, Wow, the pineapple is delicious. They're out of season, our host smugly responds, but I lucked out and was able to find a couple of good ones. And in response to this, the faces of my wife and me communicate awestruck worship—you really know how to pick fruit; you are a better person than we are. We are praising the host for this display of free will, for the choice made at the split in the road that is Pineapple Choosing. But we're wrong. Genes have something to do with the olfactory receptors our host has that help out in detecting ripeness. Maybe our host comes from a people whose deep and ancient cultural values include learning how to feel up a pineapple to tell if it's good. The sheer luck of the socioeconomic trajectory of our host's life has provided the resources to prowl around an overpriced organic market that plays Peruvian folk Muzak.
It is so hard to truly feel as if there is no free will, to not fall for this falsehood of accepting that there is a biological substrate of potentials and constraints, but that there is homunculus-ish separation in what the person has done with that substrate—"Well it's not the person's fault if nature has given them a face that isn't the loveliest, but after all, whose brain is it that chose to get that hideous nose ring?"
This transcends mere talk of nose rings and pineapples. As a father, I am immersed in the community of neurotic parents frantically trying to point our children in the direction of the most perfect adulthoods imaginable. When considering our kids' schooling, there is a body of wonderful research by a colleague of mine, Carol Dweck, that we always cite. To wildly summarize and simplify: take a child who has just done something laudable academically, and indeed laud her, saying, Wow, that's great, you must be so smart. Alternatively, in the same circumstance, praise her instead with, Wow, that's great, you must have worked so hard. And saying the latter is a better route for improving academic performance in the future – don't praise the child's intrinsic intellectual gifts; praise the effort and discipline she chose to put into the task.
Well, what's wrong with that? Nothing, if that research simply produces a value-free prescription—"'You must have worked so hard' is a more effective approach for enhancing academic performance than 'You're so smart.'" But it is wrong if you are patting the homunculus on the head, concluding that a child who has achieved something through effort is a better, more praiseworthy producer of choice than a child running on plain raw smarts. That is because free will falls by the wayside even when considering self-discipline, executive function, emotional regulation, and gratification postponement. For example, damage to the frontal cortex, the brain region most intimately involved in those functions, produces someone who knows the difference between right and wrong yet still can't control their behavior, even their murderous behavior. Different versions of a subtype of dopamine receptor influence how risk-taking and sensation-seeking a person is. If someone is infected with the protozoan parasite Toxoplasma gondii, they are likely to become subtly more impulsive. There's a class of stress hormones that can atrophy neurons in the frontal cortex; by early elementary school, a child raised amid the duress of poverty tends to lag behind in the maturation of the frontal cortex.
Maybe we can get to the point of truly realizing that when we say, "What beautiful cheekbones you have," we are congratulating the person based on the unstated belief that they chose the shape of their zygomatic arches. It's not that big of a problem if we can't achieve that mindset. But it is a big one if, when considering, say, that six-year-old whose frontocortical development has been hammered by early-life stress, we mistake his crummy impulse control for a lack of some moral virtue. Or if we do the same in any other realm of the foibles and failures, even the monstrosities, of human behavior. This is extremely relevant to the world of the criminal justice system. And to anyone who would say that it is dehumanizing to claim that criminal behavior is the end product of a broken biological machine, the answer must be that it is a hell of a lot better than damning the behavior as the end product of a rotten soul. And it is equally not a great thing to think in terms of praise, of good character, of good choice, when looking at the end products of lucky, salutary biology.
But it is so difficult to really believe that there is no free will, when so many of the threads of causality are not yet known, or are as intellectually inaccessible as having to automatically think about the behavioral consequences of everything from the selective pressures of hominid evolution to what someone had for breakfast. This difficulty is something that we should all worry about.
3 notes · View notes