Psychology

This is what happened to fathers’ hormone levels when they watched their kids play football

By Christian Jarrett

The effect of playing sport on men’s testosterone levels is well documented. Generally speaking, the winner enjoys a testosterone boost, while the loser experiences the opposite (competition also affects women’s hormone levels, though these effects are far less studied and differ from those seen in men). The evolutionary-based explanation for the hormonal effects seen in men is that the winner’s testosterone rise acts to increase their aggression and the likelihood that they will seek out more contests, while the loser skulks off to lick their wounds. When it comes to vicarious effects of competition on men’s testosterone, however, the findings are more mixed. There’s some evidence that male sports fans show testosterone gains after seeing their teams win, but other studies have failed to replicate this finding.

A new, small study in Human Nature adds to this literature by examining the hormonal changes (testosterone and cortisol) in fathers watching their children play a football game – a situation in which you might particularly expect to see vicarious hormonal effects since it’s the men’s own kin who are involved.

The eighteen participating fathers (average age 47) were recruited in the US state of New Mexico where they were watching their kids (average age 13) play in a local football (soccer) tournament. Nine of them were watching their sons play, the others were watching their daughters. The dads provided saliva samples before and after the matches, and also answered some questions about their child and the game.

Anthropologist Louis Alvarado at the University at Albany and his colleagues, including the psychologists Melissa Eaton and Melissa Thompson at the University of New Mexico, found that the fathers’ testosterone and cortisol levels increased after the experience of watching the games (by 81 per cent and 417 per cent, respectively). These changes weren’t linked to the outcomes of the games, but were to an extent explained by whether or not the fathers believed that the referee had acted unfairly towards their child’s team – if they did perceive unfairness, the fathers’ post-match cortisol and testosterone tended to be higher (fathers with higher pre-match testosterone were also more likely to perceive unfairness).
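For the technically curious, hormone “reactivity” here is just the percentage change from the pre-match baseline. A minimal Python sketch with made-up assay values (the paper reports only the aggregate 81 and 417 per cent figures):

```python
# Hypothetical pre- and post-match saliva assay values -- invented for
# illustration, NOT the study's raw data.
pre_testosterone, post_testosterone = 100.0, 181.0   # e.g. pg/mL
pre_cortisol, post_cortisol = 0.12, 0.62             # e.g. ug/dL

def percent_change(pre: float, post: float) -> float:
    """Percentage rise from the pre-match baseline."""
    return (post - pre) / pre * 100

print(f"Testosterone reactivity: {percent_change(pre_testosterone, post_testosterone):.0f}%")  # 81%
print(f"Cortisol reactivity: {percent_change(pre_cortisol, post_cortisol):.0f}%")              # 417%
```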

The researchers said their main finding – the link between hormonal changes and fairness perception – was “consistent with a functional explanation in which hormonal changes are associated with the potential for future conflict – here, in the context of responding to potential threats affecting one’s own status and that of kin.”

Given that aggression among parents watching their kids has become “an important cultural issue”, the researchers added that their results could “… have implications for the growing body of literature that attempts to curb the problem of sideline violence by identifying the proximate and individualistic factors associated with conflict potential.”

Other findings to come out of the study were that fathers watching their sons showed greater testosterone rises than fathers watching their daughters, as did fathers who felt sports were less important to their child (this latter result was opposite to expectations, and the researchers speculated that it was perhaps connected to the fathers’ frustration or disappointment that their child was not taking the competition seriously enough).

A more technical finding was that gains in the fathers’ “stress hormone” cortisol tended to predict subsequent increases in their testosterone. This result provides tentative support for the so-called “positive coregulation” model of cortisol and testosterone, in which increases in cortisol supplement the effects of testosterone when males are competing, while arguing against the opposite theory that sees cortisol as down-regulating testosterone and reducing the likelihood of the individual engaging in competitive behaviour in times of stress. The researchers said the “positive coregulation” model makes more sense in evolutionary terms, with stress (and cortisol) priming high-ranking male primates to be more competitive when they are faced with the threat of status competition from more junior males.
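One way to picture the coregulation claim: if cortisol gains supplement testosterone, regressing each father’s testosterone change on his cortisol change should yield a positive slope. A hedged sketch with invented change scores (the study’s actual models were more involved):

```python
import numpy as np
from scipy import stats

# Invented per-father hormone change scores (post minus pre) for the
# 18 participants -- illustrative only.
rng = np.random.default_rng(0)
cortisol_change = rng.normal(0.5, 0.2, size=18)
testosterone_change = 40 * cortisol_change + rng.normal(0, 5, size=18)

result = stats.linregress(cortisol_change, testosterone_change)
# A significantly positive slope is what "positive coregulation" predicts;
# a negative slope would favour the down-regulation account.
print(f"slope = {result.slope:.1f}, p = {result.pvalue:.4f}")
```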

Steroid Hormone Reactivity in Fathers Watching Their Children Compete

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/k3JWInB7S3k/

Most of us have some insight into our personality traits, but how self-aware are we in the moment?

Correlations between momentary self-views and observed behaviour, from Sun and Vazire, 2018.

By guest blogger Jesse Singal

Your ability to accurately understand your own thoughts and behaviour in a given moment can have rather profound consequences. If you don’t realise you’re growing loud and domineering during a heated company meeting, that could affect your standing at work. If you react in an oversensitive manner to a fair and measured criticism levelled at you by your romantic partner, it could spark a fight.

It’s no wonder, then, that psychology researchers are interested in the question of how well people understand how they are acting and feeling in a given moment, a concept known as state self-knowledge (not to be confused with its better-studied cousin trait self-knowledge, or individuals’ ability to accurately gauge their own personality characteristics that are relatively stable over time).

In a new study available as a preprint on PsyArXiv, Jessie Sun and Simine Vazire of the University of California, Davis adopted a novel, data-heavy approach to gauging individuals’ levels of personality state self-knowledge (i.e. their personality as it manifested in the moment), and it revealed some interesting findings about the ways in which people are – and aren’t – able to accurately understand their own fleeting psychological states.

The study, provisionally titled “Do People Know What They’re Like in the Moment?”, had two main components. First, 434 Washington University in St. Louis students were texted four times a day for 15 days and asked to rate themselves on four of the Big Five personality characteristics based on how they had felt and behaved during the previous hour: Extraversion, Agreeableness (only “if they reported that they were around others during the target hour”), Conscientiousness, and Neuroticism. Of these 434 participants, 311 also wore a recording device paired with an iPod touch that recorded for 30 seconds every nine and a half minutes from 7 a.m. to 2 a.m. every day, generating a huge amount of audio data. (Before researchers had full access to the recordings, students were allowed to listen to them and erase anything they didn’t want the researchers to hear, but only 99 files were deleted from a cache that became “152,592 usable recordings from 304 participants.”)
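As a rough back-of-envelope check on the scale of that audio corpus (the 30-second clips, 9.5-minute interval and 7 a.m. to 2 a.m. window come from the paper; the rest is simple arithmetic):

```python
# Recording window: 7 a.m. to 2 a.m. the next day = 19 hours.
hours_per_day = 19
minutes_between_clips = 9.5
days = 15
participants = 304  # those who yielded usable recordings

clips_per_day = hours_per_day * 60 / minutes_between_clips  # 120 per day
theoretical_max = clips_per_day * days * participants
print(f"{clips_per_day:.0f} clips/day -> {theoretical_max:,.0f} possible clips")
# ~547,200 possible clips, of which 152,592 were usable (many snippets
# presumably contained no codable speech, and 99 were deleted by participants).
```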

Second, a veritable small army of research assistants – more than a hundred – listened to the recordings and rated the speakers on the same four personality states they had previously rated themselves on. For a subset of the study participants, then, researchers had three useful pieces of information: recordings of them going about their lives, participants’ rating of their own personality states during those periods, and outside observers’ rating of those same states. This allowed the researchers to measure the extent to which self-ratings correlated with other-ratings – that is, did Tom’s view that he was quite extroverted during a given hour match up with how others who heard him on audio interpreted his behaviour during snippets of that period?
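Conceptually, the agreement statistic boils down to a correlation between paired self- and observer-ratings of the same hour. A toy illustration with invented 1–7 ratings (the authors’ analysis used multilevel models rather than a raw Pearson correlation):

```python
import numpy as np

# Invented state-extraversion ratings for the same ten hours, once by the
# participant and once by observers who heard the audio snippets.
self_ratings = np.array([6, 2, 5, 7, 3, 4, 6, 1, 5, 4])
observer_ratings = np.array([5, 3, 5, 6, 2, 4, 7, 2, 4, 4])

# np.corrcoef returns the 2x2 correlation matrix; element [0, 1] is r.
r = np.corrcoef(self_ratings, observer_ratings)[0, 1]
print(f"self-observer agreement: r = {r:.2f}")
```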

And measure they did, generating a pretty cool series of graphs (see above). The steeper the upward slope, the greater the agreement between self- and other-ratings. So as you can see, Extraversion was, by a significant margin, the personality characteristic for which people seemed to have the most accurate self-knowledge. This shouldn’t necessarily be a surprise. For one thing, while intuition isn’t always an accurate guide on such matters, common sense would suggest that people are well aware of the extent to which they are actively and enthusiastically engaging in social activity, and that we’re all pretty good at judging others’ level of extraversion as well. Second, the authors note that this finding is “consistent with a large body of literature demonstrating high self-observer agreement on trait extraversion across a wide range of conditions.” The state with the second-highest subject-observer agreement, as the graph shows, was Conscientiousness (again, perhaps because in-the-moment conscientious behaviour is pretty easy for both the self and others to discern).

What about the two other personality states, where there was significantly less subject-observer agreement? The tricky part about interpreting these findings, as the authors point out, is that there are two possible explanations: the first is that subjects really do lack insight into their temporary psychological states and that the external observers’ ratings accurately captured this; the second is that the observers were wrong because they only had access to a limited slice of audio that simply might not be enough to accurately gauge the subject’s state at that moment (remember, the raters had no visual information to go on – no body language, facial expressions, or anything else).

So when it comes to Agreeableness and that rather flat line – meaning little agreement between subjects and observers – the authors argue that “it is plausible that people have less self-insight into their momentary agreeableness,” because Agreeableness has so much more to do with external, observable behaviours, and with other people’s perceptions of your warmth, than with internal “thoughts and feelings” (meaning that other people might naturally be better judges of this personality state). Neuroticism, on the other hand, is different – it’s a state much more characterised by internal feelings than by outward behaviour. So in that case, Sun and Vazire argue that their findings alone shouldn’t be seen as supporting the idea that people are bad at self-rating their present level of Neuroticism – rather, it’s more likely the audio just didn’t give the observers enough to go on.

As is probably clear, this is a complicated topic, and it seems likely that people are much better at understanding their present personality states in some ways than others. Sun and Vazire’s study was quite ambitious, and it offers a useful path forward for researchers hoping to learn more about an important issue. In the meantime, their general takeaway? “Our findings show that we can probably trust what people say about their momentary levels of extraversion, conscientiousness, and likely neuroticism. However, our findings also call into question people’s awareness of when they are being considerate versus rude.” Useful information – and probably not a surprise to anyone who has dealt with a bullying coworker who doesn’t seem to understand the impression he’s making on his colleagues.

Do People Know What They’re Like in the Moment? [This paper is a preprint and the final peer-reviewed version may differ from the version that this report was based on]

Post written by Jesse Singal (@JesseSingal) for the BPS Research Digest. Jesse is a contributing writer at New York Magazine. He is working on a book about why shoddy behavioral-science claims sometimes go viral, for Farrar, Straus and Giroux.

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/dFvjWfqr85s/

A brief jog sharpens the mind, boosting attentional control and perceptual speed. Now researchers are figuring out why

By Christian Jarrett

If you wanted to ensure your mind was in top gear, which do you think would provide the better preparation – 15 minutes of calm relaxation, or a 15-minute jog?

A study involving 101 undergrad students suggests you’d be better off plumping for the latter.

Evidence had already accumulated showing that relatively brief, moderate aerobic exercise – like going for a brisk walk or a jog – has immediate benefits for mental functioning, especially speed and attentional control. A parallel literature has also documented how brief, aerobic exercise has beneficial effects on your mood, including making you feel more energetic, even when you don’t expect it to. In their new paper in Acta Psychologica, Fabian Legrand and his colleagues bridged these findings by looking to see if the emotional effects of exercise might be at least partly responsible for the cognitive benefits.

They asked their participants to rate how energetic and vigorous they were feeling and then to complete two cognitive tests (versions of the Trail Making Test, which involves drawing lines between numbers and letters as fast and as accurately as possible).

Next, they allocated half their student participants to go for a 15-minute group jog around the campus and the others to spend the same time following group relaxation exercises. Finally, two minutes after the jog/relaxation session, the students answered the same questions as before about their feelings of energy, and then they repeated the cognitive tests.

The students who went for a jog, but not the relaxation students, subsequently showed significant improvement on the version of the Trail Making Test that measures mental speed and attentional control (but not on the other version, which taps memory and cognitive switching). Moreover, this improvement in cognition was fully mediated by their increased feelings of energy and vigour, implying – although not proving conclusively – that the jog boosted cognition through its effects on their subjective sense of having more energy (in contrast, the relaxation group actually felt dramatically less energetic).
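For readers unfamiliar with mediation analysis, the core logic is that the effect of condition (jog vs. relaxation) on cognition should shrink towards zero once the proposed mediator (felt energy) is statistically controlled. A minimal sketch with invented data, not the authors’ actual pipeline:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 101
jogged = rng.integers(0, 2, n)                   # 0 = relaxation, 1 = jog
energy = 2.0 * jogged + rng.normal(0, 1, n)      # invented mediator scores
cognition = 1.5 * energy + rng.normal(0, 1, n)   # invented outcome scores

# Total effect: cognition regressed on condition alone.
total = sm.OLS(cognition, sm.add_constant(jogged)).fit()
# Direct effect: cognition regressed on condition plus the mediator.
X = sm.add_constant(np.column_stack([jogged, energy]))
direct = sm.OLS(cognition, X).fit()

print(f"total effect:  {total.params[1]:.2f}")
print(f"direct effect: {direct.params[1]:.2f}  (near zero under full mediation)")
```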

Among the study limitations was the fact the relaxation session took place inside, while the jog was outside. Notwithstanding this issue and some others, and while recognising the need for more research, Legrand and his team said that their findings “add weight to recent suggestions that increased feelings of energy may mediate the relationship between aerobic exercise and some aspects of cognitive functioning.”

Brief aerobic exercise immediately enhances visual attentional control and perceptual speed. Testing the mediating role of feelings of energy

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/R50jRjfOh1M/

“My-side bias” makes it difficult for us to see the logic in arguments we disagree with

By Christian Jarrett

In what feels like an increasingly polarised world, trying to convince the “other side” to see things differently often feels futile. Psychology has done a great job outlining some of the reasons why, including showing that, regardless of political leanings, most people are highly motivated to protect their existing views.

However, a problem with some of this research is that it is very difficult to concoct opposing real-life arguments of equal validity, so as to make a fair comparison of people’s treatment of arguments they agree and disagree with.

To get around this problem, an elegant new paper in the Journal of Cognitive Psychology has tested people’s ability to assess the logic of formal arguments (syllogisms) structured in the exact same way, but that featured wording that either confirmed or contradicted their existing views on abortion. The results provide a striking demonstration of how our powers of reasoning are corrupted by our prior attitudes.

Vladimíra Čavojová at the Slovak Academy of Sciences and her colleagues recruited 387 participants in Slovakia and Poland, mostly university students. The researchers first assessed the students’ views on abortion (a highly topical and contentious issue in both countries), then they presented them with 36 syllogisms – these are formal logical arguments that come in the form of three statements (see examples, below).

[Figure: example syllogisms used in the study]

The participants’ challenge was to determine whether the third statement of each syllogism followed logically from the first two, always assuming that those initial two premises were true. This was a test of pure logical reasoning – to succeed at the task, one only needs to assess the logic, putting aside one’s prior knowledge or beliefs (to reinforce that this was a test of logic, the participants were instructed to always treat the first two premises of each syllogism as true).
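For the curious, “follows logically” has a precise meaning that can even be checked by brute force: a syllogism is valid if its conclusion is true in every possible world in which both premises are true. The sketch below (illustrative code, not from the paper) enumerates all occupancy patterns of the eight Venn regions carved out by three categories:

```python
from itertools import product

# The eight Venn regions for categories A, B, C; a "world" records which
# regions contain at least one individual.
REGIONS = list(product([False, True], repeat=3))

def holds(statement, world):
    """Evaluate a categorical statement against the occupied regions."""
    kind, x, y = statement
    occupied = [r for r, occ in zip(REGIONS, world) if occ]
    if kind == "all":    # every x is a y
        return all(r[y] for r in occupied if r[x])
    if kind == "no":     # no x is a y
        return not any(r[x] and r[y] for r in occupied)
    if kind == "some":   # at least one x is a y
        return any(r[x] and r[y] for r in occupied)
    raise ValueError(kind)

def valid(premises, conclusion):
    """Valid iff no world makes the premises true and the conclusion false."""
    for world in product([False, True], repeat=8):
        if all(holds(p, world) for p in premises) and not holds(conclusion, world):
            return False
    return True

A, B, C = 0, 1, 2
# "All A are B; all B are C; therefore all A are C" -- valid.
print(valid([("all", A, B), ("all", B, C)], ("all", A, C)))    # True
# "All A are B; some B are C; therefore some A are C" -- invalid.
print(valid([("all", A, B), ("some", B, C)], ("some", A, C)))  # False
```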

Crucially, while some of the syllogisms were neutral, others featured a final statement germane to the abortion debate, either on the side of pro-life or pro-choice (but remember this was irrelevant to the logical consistency of the syllogisms).

Čavojová and her team found that the participants’ existing attitudes to abortion interfered with their powers of logical reasoning – the size of this effect was modest but statistically significant.

In the main, participants had trouble accepting as logical those valid syllogisms that contradicted their existing beliefs, and similarly they found it difficult to reject as illogical those invalid syllogisms that conformed with their beliefs. This seemed to be particularly the case for participants with more pro-life attitudes. What’s more, this “my-side bias” was actually greater among participants with prior experience or training in logic (the researchers aren’t sure why, but perhaps prior training in logic gave participants even greater confidence to accept syllogisms that supported their current views – whatever the reason, it shows again what a challenge it is for people to think objectively).

“Our results show why debates about controversial issues often seem so futile,” the researchers said. “Our values can blind us to acknowledging the same logic in our opponent’s arguments if the values underlying these arguments offend our own.”

This is just the latest study that illustrates the difficulty we have in assessing evidence and arguments objectively. Related research that we’ve covered recently has also shown that: our brains treat opinions we agree with as facts; that many of us over-estimate our knowledge; how we’re biased to see our own theories as accurate; and that when the facts appear to contradict our beliefs, well then we turn to unfalsifiable arguments. These findings and others show that thinking objectively does not come easily to most people.

My point is valid, yours is not: myside bias in reasoning about abortion

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/CWeNQH_D8r4/

Similarity in shame and its repercussions across 15 world cultures points to the emotion’s survival function

The 15 sites the researchers visited to study shame, from Sznycer et al 2018

By Emma Young

Shame feels so awful it’s hard to see how it could have an upside, especially when you consider specific triggers of the emotion – such as body-shaming, which involves criticising someone for how their body looks. But is shame always an ugly emotion that we should try to do away with? Or can it be helpful? 

The answer, according to a new study of 899 people from all over the world, published in PNAS, is that, as an emotion, shame can not only be useful but is fundamental to our ability to survive and thrive in a group. The essential job of shame, it seems, is to stop us from being too selfish for our own good.

Daniel Sznycer at the University of Montreal, Canada, and his colleagues interviewed people living in 15 very different small-scale societies, including in the Andes in Ecuador, a remote region of Siberia, and the Indian Ocean island of Mauritius. 

The researchers asked one group from each society for their thoughts on 12 hypothetical situations involving a person of the same sex as them, including how much shame this person should feel if he or she was ugly, or lazy, or stole from someone in the community, for example. Participants were also asked to indicate, using a four-point scale, how negatively they would view this person as a result (thus providing an indication of how much that person would be “devalued” by others). The researchers also asked members of a fresh group of participants in each society to indicate, again on a four-point scale, how much shame they would themselves feel in the various hypothetical situations. 

Overall, the researchers found very close agreement between the degree of felt shame that participants estimated being associated with a given act or state and how much they indicated a person would be devalued as a result of committing that act or being in that state. This was particularly true within a society, but it also held across societies. “The fact that the same pattern is encountered in such mutually remote communities suggests that shame’s match to audience devaluation is a design feature crafted by [natural] selection, and not a product of cultural contact or convergent cultural evolution,” the researchers write.
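In effect, the key statistic is a correlation, across the 12 scenarios, between one group’s shame ratings and the other group’s devaluation ratings. A toy sketch with invented scenario means (the paper’s analysis also handled comparisons across societies):

```python
import numpy as np

# Invented mean ratings (1-4 scale) for the 12 scenarios in one society:
# one group's estimates of felt shame, a separate group's devaluation ratings.
shame = np.array([3.8, 2.1, 3.5, 1.4, 2.9, 3.2, 1.8, 2.5, 3.9, 1.2, 2.2, 3.0])
devaluation = np.array([3.6, 2.3, 3.4, 1.6, 2.7, 3.3, 2.0, 2.4, 3.7, 1.5, 2.1, 2.8])

r = np.corrcoef(shame, devaluation)[0, 1]
print(f"shame-devaluation agreement across scenarios: r = {r:.2f}")
```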

Our ancestors lived in small, close-knit bands, and they depended on each other for survival. In bad times, especially, they had to rely on each other to pull through. Always being selfless wouldn’t have been wise, as the individual would likely have been exploited. But for someone always to act contrary to the group’s ideas of what mattered, and what was important (that all members should contribute to the tasks important for survival, for example), would have been a bad move, too, as they could have found themselves shunned or even exiled. 

To thrive, the researchers argue, a person would have had to accurately weigh the payoff of an act (taking food without telling others, or pretending to be sick instead of foraging or hunting, for instance) against the cost if they were found out. The results of the study suggest that shame evolved to help us to make the right decision – to act in our own long-term interests by not seriously jeopardising our place in our social group. Shame, then, functions like pain – as a warning not to repeat a behaviour that threatens our own wellbeing. 

This doesn’t mean, of course, that shame is always good. If your group has badly skewed ideas about what really matters – if it places a high value on what clothes you wear, or what your body looks like, for example – then shame is skewed too, into something that isn’t helpful, but harmful. 

Cross-cultural invariances in the architecture of shame

Emma Young (@EmmaELYoung) is Staff Writer at BPS Research Digest

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/Y29tZMW18TY/

There’s a fascinating psychological story behind why your favourite fictional baddies all have a truly evil laugh

By guest blogger David Robson

Towards the end of the Disney film Aladdin, our hero’s love rival, the evil Jafar, discovers Aladdin’s secret identity and steals his magic lamp. Jafar’s wish to become the world’s most powerful sorcerer is soon granted and he then uses his powers to banish Aladdin to the ends of the Earth. 

What follows next is a lingering close-up of Jafar’s body. He leans forward, fists clenched, with an almost constipated look on his face. He then explodes in uncontrollable cackles that echo across the landscape. For many millennials growing up in the 1990s, it is the archetypal evil laugh.

Such overt displays of delight at others’ misfortune are found universally in kids’ films, and many adult thriller and horror films too. Just think of the rapturous guffaws of the alien in the first Predator film as it is about to self-detonate, taking Arnold Schwarzenegger with it. Or Jack Nicholson’s chilling snicker in The Shining. Or Wario’s manic crowing whenever Mario was defeated. 

A recent essay by Jens Kjeldgaard-Christiansen in the Journal of Popular Culture asks what the psychology behind this might be. Kjeldgaard-Christiansen is well placed to provide an answer having previously used evolutionary psychology to explain the behaviours of heroes and villains in fiction more generally.

In that work, he argued that one of the core traits a villain should show is a low “welfare trade-off” ratio: they are free-riders who cheat and steal, taking from their community while contributing nothing. Such behaviour is undesirable for societies today, but it would have been even more of a disaster in prehistory when the group’s very survival depended on everyone pulling their weight. As a result, Kjeldgaard-Christiansen argues we are wired to be particularly disgusted by cheating free-riders – to the point that we may even feel justified in removing them from the group, or even killing them.

However, there are degrees of villainy and the most dangerous and despised people are those who are not only free riders and cheats, but psychopathic sadists, who perform callous acts for sheer pleasure. Sure enough, previous studies have shown that it is people matching this description whom we consider to be truly evil (since there is no other way to excuse or explain their immorality) and therefore deserving of the harshest punishments. Crucially, Kjeldgaard-Christiansen argues that a wicked laugh offers one of the clearest signs that a villain harbours such evil, gaining “open and candid enjoyment” from others’ suffering – moreover, fiction writers know this intuitively, time and again using the malevolent cackle to identify their darkest characters. 

Part of the power of the evil laugh comes from its salience, Kjeldgaard-Christiansen says: it is both highly visual and vocal (as the close-up of Jafar beautifully demonstrates) and the staccato rhythm can be particularly piercing. What’s more, laughs are hard to fake: a genuine, involuntary laugh relies on the rapid oscillation of the “intrinsic laryngeal muscles”, movements that appear to be difficult to produce by our own volition without sounding artificial. As a result, it’s generally a reliable social signal of someone’s reaction to an event, meaning that we fully trust what we are hearing. Unlike dialogue – even the kind found in a children’s film – a sadistic or malevolent laugh leaves little room for ambiguity, so there can be little doubt about the villain’s true motives.

Such laughs are also particularly chilling because they run counter to the usual pro-social function of laughter – the way it arises spontaneously during friendly chats, for example, serving to cement social bonds. 

There are practical reasons too for the ubiquity of the evil laugh in children’s animations and early video games, Kjeldgaard-Christiansen explains. The crude graphics of the first Super Mario or Kung Fu games for Nintendo, say, meant it was very hard to evoke an emotional response in the player – but equipping the villain with an evil laugh helped to create some kind of moral conflict between good and evil that motivated the player to don their cape and beat the bad guys. “This is the only communicative gesture afforded to these vaguely anthropomorphic, pixelated opponents, and it does the job,” he notes. 

There are limits to the utility of the evil laugh in story-telling, though. Kjeldgaard-Christiansen admits that its crude power would be destructive in more complex story-telling, since the display of pleasure at others’ expense would prevent viewers from looking for more subtle motivations or the role of context and circumstance in a character’s behaviour. But for stories dealing with black and white morality, such as those aimed at younger viewers who have not yet developed a nuanced understanding of the world, its potential to thrill is second to none.

Kjeldgaard-Christiansen’s article is certainly one of the most entertaining papers I have read in a long time, and his psychological theories continue to be thought-provoking. It would be fun to see more experimental research on this subject – comparing the acoustic properties of laughs, for instance, to find out which sounds the most evil. But in my mind, it will always be Jafar’s.

Social Signals and Antisocial Essences: The Function of Evil Laughter in Popular Culture

Post written by David Robson (@d_a_robson) for the BPS Research Digest. His first book, The Intelligence Trap, will be published by Hodder & Stoughton (UK)/WW Norton (USA) in 2019.

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/MbGh35K94Xc/

Students’ mistaken beliefs about how much their peers typically study could be harming their exam performance in some surprising ways

By Christian Jarrett

A lot of us use what we consider normal behaviour – based on how we think most other people like us behave – to guide our own judgments and decisions. When these perceptions are wide of the mark (known as “pluralistic ignorance”), this can affect our behaviour in detrimental ways. The most famous example concerns students’ widespread overestimation of how much their peers drink alcohol, which influences them to drink more themselves.

Now a team led by Steven Buzinski at the University of North Carolina at Chapel Hill has investigated whether students’ pluralistic ignorance about how much time their peers spend studying for exams could be having a harmful influence on how much time they devote to study themselves. Reporting their findings in Teaching of Psychology, the team did indeed find evidence of pluralistic ignorance about study behaviour, but it seemed to have some effects directly opposite to what they expected.

Across four studies with hundreds of social psych undergrads, the researchers found that, overall, students tended to underestimate how much time their peers spent studying for an upcoming exam (but there was a spread of perceptions, with some students overestimating the average). Moreover, students’ perceptions of the social norm for studying were correlated with their own study time, suggesting – though not proving – that their decisions about how much to study were influenced by what they felt was normal.
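A simple way to quantify this kind of pluralistic ignorance is to compare each student’s perceived norm against the sample’s actual average. A hedged sketch with invented numbers (not the study’s measures or data):

```python
import numpy as np

rng = np.random.default_rng(2)
actual_hours = rng.normal(8, 2, size=200)              # invented self-reported study hours
perceived_peer_hours = rng.normal(6.5, 2.5, size=200)  # invented estimates of peers' hours

class_mean = actual_hours.mean()
misperception = perceived_peer_hours - class_mean      # negative = underestimates the norm

print(f"class mean: {class_mean:.1f} hours")
print(f"share underestimating the norm: {(misperception < 0).mean():.0%}")
```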

However, when Buzinski and his colleagues looked to see whether the students’ misconceptions about their peers’ study time were associated with their subsequent exam performance, they found the opposite pattern to what they expected.

The researchers had thought that underestimating typical study time would be associated with choosing to study less, and in turn that this would be associated with poorer exam performance. Instead, they found that it was those students who overestimated their peers’ study time who performed worse in the subsequent exam, and this seemed to be fully explained by their feeling unprepared for the exam (the researchers speculated that such feelings could increase anxiety and self-doubt, thus harming exam performance).

In a final study, one week before an exam, the researchers corrected students’ misconceptions about the average exam study time and this had the hoped-for effect of correcting pluralistic ignorance about normal study behaviour; it also removed any links between beliefs about typical study time and feelings of unpreparedness.

Most promisingly, average exam performance was superior after this intervention, as compared with performance in a similar exam earlier in the semester, suggesting that correcting misconceptions about others’ study behaviour is beneficial (perhaps learning the truth about how much their peers studied gave the students a chance to adjust their own study behaviour, and this may have boosted the confidence of those who would otherwise have overestimated average study time; however, this wasn’t tested in the study, so it remains speculative).

Of course, the improved performance could simply have been due to practice effects through the semester, but it’s notable that no such improvement in the late-semester exam was observed in earlier years, when the study-time-beliefs intervention was not applied.

Future research will be needed to confirm the robustness of these findings, including in more diverse student groups, and to test the causal role of beliefs about study time and feelings of preparedness, for example by directly observing how correcting misconceptions affects students’ study behaviour and their confidence.

For now, Buzinski and his colleagues recommend it could be beneficial to use class discussions “…to correct potentially detrimental misperceptions”. They added: “Unless we as educators actively intervene, our students will approach their coursework from an understanding based upon flawed perceptions of the classroom norm, and those most at risk may suffer the most from their shared ignorance.”

Insidious Assumptions: How Pluralistic Ignorance of Studying Behavior Relates to Exam Performance

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/434H6Zg3yp8/

A cartography of consciousness – researchers map where subjective feelings are located in the body

Bodily feeling maps, from Nummenmaa et al, 2018

By guest blogger Mo Costandi

“How do you feel?” is a simple and commonly asked question that belies the complex nature of our conscious experiences. The feelings and emotions we experience daily consist of bodily sensations, often accompanied by some kind of thought process, yet we still know very little about exactly how these different aspects relate to one another, or about how such experiences are organised in the brain.  

Now, reporting their results in PNAS, a team of researchers in Finland, led by neuroscientist Lauri Nummenmaa of the University of Turku, has produced detailed maps of what they call the “human feeling space”, showing how each of dozens of these subjective feelings is associated with a unique set of bodily sensations.

In 2014, Nummenmaa and his colleagues published bodily maps of emotions showing the distinct bodily sensations associated with six basic emotions, such as anger, fear, happiness and sadness, and seven complex emotional states, such as anxiety, love, pride, and shame. 

Building on this earlier work, for their new research they recruited 1,026 participants and asked them to complete an online survey designed to assess how they perceive 100 “core” subjective feelings, compiled from the American Psychological Association’s Dictionary of Psychology, ranging from homeostatic states such as hunger and thirst, to emotional states such as anger and pleasure, and cognitive functions such as imagining and remembering. 

The participants were shown a list of the 100 core feelings on the computer screen, and asked to drag and drop each one into a box, placing similar feelings close to each other. They also had to rate each feeling according to how much it is experienced in the body, how much of it is psychological, how pleasant it feels, and how much control they think they have over it.

Their descriptions of the core feelings clustered into five distinct groups, based on similarity: Positive emotions, such as happiness and togetherness; negative emotions, such as fear and shame; thought processes, such as hearing and memorising; homeostatic sensations, such as hunger and thirst; and sensations associated with illness, such as coughing and sneezing. 

In another online experiment, Nummenmaa and his colleagues asked the participants to indicate exactly where in the body they felt each state, by colouring in a blank body shape, allowing them to map the bodily sensations associated with each of the 100 core feelings.

The researchers then pooled these data to create “bodily sensation maps” for each of the core feelings (see image, above). For example, the participants localised the feeling of anger to the head, chest, and hands; feelings of hunger and thirst to the stomach and throat, respectively; and the feelings of ‘being conscious’, imagining, and remembering entirely to the head.
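The pooling step is conceptually simple: treat each participant’s colouring as a binary mask over the body silhouette and average the masks, so that each location’s intensity reflects the proportion of participants who felt the state there. A minimal sketch with invented dimensions (the study’s actual preprocessing was more elaborate):

```python
import numpy as np

# Invented data: each participant colours a 64x32 body silhouette for one
# core feeling; 1 marks "I feel it here", 0 elsewhere.
rng = np.random.default_rng(3)
n_participants = 1026
colourings = rng.integers(0, 2, size=(n_participants, 64, 32))

# The group-level "bodily sensation map" is the per-location mean: the
# proportion of participants who coloured each spot.
group_map = colourings.mean(axis=0)
print(group_map.shape, float(group_map.min()), float(group_map.max()))
```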

The maps showed that, despite the similarities, each core feeling was associated with a unique set of bodily sensations. For example, participants reported perceiving anger mostly in the head and hands, anxiety mostly in the chest, and sadness in the chest and head. Although similar feelings produced similar body maps, the intensity and precise distribution of bodily sensations associated with each was unique.

That both anger and fear were associated with intense bodily sensations in the head and chest adds to past work showing that both these emotions involve remarkably similar physiological changes to the body, and further explains why we usually have to depend on context to help us interpret the emotional meaning of our sensations.

The new results provide yet more evidence for the emerging idea that the body plays a crucial role in cognitive and emotional processes – something which has, until very recently, been overlooked. “In other words,” says study co-author Riitta Hari, “the human mind is strongly embodied.”

Maps of subjective feelings

Post written by Mo Costandi (@Mocost) for the BPS Research Digest. Mo trained as a developmental neurobiologist and now works as a freelance writer specialising in neuroscience. He wrote the Neurophilosophy blog, hosted by The Guardian, and is the author of 50 Human Brain Ideas You Really Need to Know, and Neuroplasticity.

Article source: http://feedproxy.google.com/~r/BpsResearchDigest/~3/pwtsjdbi7mE/
