Subconscious learning shapes pain responses

In a new study from Sweden’s Karolinska Institutet, researchers report that people can be conditioned to associate images with particular pain responses – such as improved tolerance to pain – even when they are not consciously aware of the images. The findings are published in the journal PNAS.

Previous studies have shown that a person’s pain experience can be increased or decreased by associating a specific cue, such as an image, with high- or low-intensity pain. However, until now it has been unclear whether a person must be consciously aware of the cue in order to learn the association. In this study, Dr Karin Jensen and colleagues tested whether unconscious learning affected pain responses by using subliminal images, training participants to associate one image with high pain and another image with low pain.

The study involved 49 participants in all, randomly assigned to four experimental groups designed to elucidate the impact of different levels of conscious awareness during the experiment. All participants were generally healthy, with no chronic illnesses or psychiatric diagnoses, and none reported taking any medication apart from hormonal contraceptives.

In the experiment, images of different faces were presented on a computer screen. For some participants, the images were shown so quickly that they could not be consciously recognized. With each image exposure, participants received pain stimulation and rated the pain on a specific scale.
As each image was repeatedly paired with either high or low pain, it became a high-pain cue or a low-pain cue that shaped the participants’ expectations.

The results suggest that pain cues can be learned without conscious awareness: participants reported increased pain when shown the high-pain image and reduced pain when shown the low-pain image during identical levels of pain stimulation, regardless of whether the images were shown subliminally.

“These results demonstrate that pain responses can be shaped by learning that takes place outside conscious awareness, suggesting that unconscious learning may have an extensive effect on higher cognitive processes in general,” says Karin Jensen.


Women more likely than men to initiate divorces, but not non-marital breakups

Women are more likely than men to initiate divorces, but women and men are just as likely to end non-marital relationships, according to a new study that will be presented at the 110th Annual Meeting of the American Sociological Association (ASA).

“The breakups of non-marital heterosexual relationships in the U.S. are quite gender neutral and fairly egalitarian,” said study author Michael Rosenfeld, an associate professor of sociology at Stanford University. “This was a surprise because the only prior research that had been done on who wanted the breakup was research on marital divorces.”

Rosenfeld’s analysis relies on data from the 2009-2015 waves of the nationally representative How Couples Meet and Stay Together survey. He considers 2,262 adults, ages 19 to 94, who had opposite-sex partners in 2009. By 2015, 371 of these people had broken up or gotten divorced. As part of his analysis, Rosenfeld found that women initiated 69 percent of all divorces, compared to 31 percent for men. In contrast, there was no statistically significant difference between the percentages of breakups initiated by unmarried women and men, regardless of whether they had been cohabiting with their partners.

Social scientists have previously argued that women initiate most divorces because they are more sensitive to relationship difficulties. Rosenfeld argues that were this true, women would initiate the breakup of both marriages and non-marital relationships at equal rates.

“Women seem to have a predominant role in initiating divorces in the U.S. as far back as there is data from a variety of sources, back to the 1940s,” Rosenfeld said.
“I assumed, and I think other scholars assumed, that women’s role in breakups was an essential attribute of heterosexual relationships, but it turns out that women’s role in initiating breakups is unique to heterosexual marriage.”

Perhaps women were more likely to initiate divorces because, as Rosenfeld found, married women reported lower levels of relationship quality than married men. In contrast, women and men in non-marital relationships reported equal levels of relationship quality.

Rosenfeld said his results support the feminist assertion that some women experience heterosexual marriage as oppressive or uncomfortable.

“I think that marriage as an institution has been a little bit slow to catch up with expectations for gender equality,” Rosenfeld said. “Wives still take their husbands’ surnames, and are sometimes pressured to do so. Husbands still expect their wives to do the bulk of the housework and the bulk of the childcare. On the other hand, I think that non-marital relationships lack the historical baggage and expectations of marriage, which makes the non-marital relationships more flexible and therefore more adaptable to modern expectations, including women’s expectations for more gender equality.”


Why do people risk their lives – or the lives of others – for the perfect selfie?

2016 hasn’t been a great year for the selfie.

In February, Argentinian tourists passed around a baby La Plata dolphin in order to take selfies with it. The endangered animal subsequently died from stress and heat exhaustion. Then, in early March, a swan died after a tourist dragged it from a lake in Macedonia – all for the sake of a selfie.

While both animal deaths elicited widespread anger, humans have been more likely to put their own lives at risk in order to snap the perfect photograph. In 2015, Russian authorities even launched a campaign warning that “A cool selfie could cost you your life.” The reason? Police estimate nearly 100 Russians have died or suffered injuries while attempting to take “daredevil” selfies – photos of themselves in dangerous situations. Examples include a woman wounded by a gunshot (she survived), two men blown up holding grenades (they did not), and people taking pics on top of moving trains.

Heights have also resulted in selfie fatalities. A Polish tourist in Seville, Spain, fell off a bridge and died attempting to take a selfie. And a Cessna pilot lost control of his plane – killing himself and his passengers – while trying to take a selfie in May of 2014.

Putting oneself in harm’s way is not the only way our selfie obsession has resulted in death. One male teen – who was allegedly suffering from body dysmorphic disorder – attempted suicide after spending hundreds of hours trying to take an “ideal” selfie.

People who frequently post selfies are often targets for accusations of narcissism and tastelessness. But with social networking apps like Snapchat becoming more and more popular, selfies are only proliferating.

So what’s going on here? What is it about the self-portrait that’s so resonant as a form of communication? And why, psychologically, might someone feel so compelled to snap the perfect selfie that they’d risk their life, or the lives of others (animals included)?

While there are no definitive answers, as a psychologist I find these questions – and this unique 21st-century phenomenon – worth exploring further.

A brief history of the selfie

Robert Cornelius, an early American photographer, has been credited with taking the first selfie: in 1839, Cornelius, using one of the earliest cameras, set up his camera and ran into the shot. The broader availability of point-and-shoot cameras in the 20th century led to more self-portraits, with many using the (still) popular method of snapping a photograph in front of a mirror.

Selfie technology took a giant leap forward with the invention of the camera phone. Then, of course, there was the introduction of the selfie stick. For a brief moment the stick was celebrated: Time named it one of the 25 best inventions of 2014. But critics quickly dubbed it the “Narcissistick,” and the sticks are now banned in many museums and parks, including the Walt Disney World Resort.

Despite the criticism directed at selfies, their popularity is only growing. Conclusive numbers seem lacking, with estimates of daily selfie posts ranging from one million to as high as 93 million on Android devices alone. Whatever the true number, a Pew survey from 2014 suggests the selfie craze skews young. While 55 percent of millennials reported sharing a selfie on a social site, only 33 percent of the silent generation (those born between 1920 and 1945) even knew what a selfie was.

A British report from this year also suggests younger women are more active participants in selfie-taking, spending up to five hours a week on self-portraits. The biggest reason for doing so? Looking good. But other reasons included making others jealous and making cheating partners regret their infidelities.

Confidence booster or instrument of narcissism?

Some do see selfies as a positive development. Psychology professor Pamela Rutledge believes they celebrate “regular people.” And UCLA psychologist Andrea Letamendi believes that selfies “allow young adults to express their mood states and share important experiences.” Some have argued that selfies can boost confidence by showing others how “awesome” you are, and can preserve important memories.

Still, there are plenty of negative associations with taking selfies. While selfies are sometimes lauded as a means for empowerment, one European study found that time spent looking at social media selfies is associated with negative body-image thoughts among young women.

Apart from injuries, fatalities and tastelessness, one big issue with selfies appears to be their function as either a cause or consequence of narcissism. Peter Gray, writing for Psychology Today, describes narcissism as “an inflated view of the self, coupled with a relative indifference to others.” Narcissists tend to overrate their talents and respond with anger to criticism. They are also more likely to bully and less likely to help others. According to Gray, surveys of college students show the trait is far more prevalent today than even as recently as 30 years ago.

Do selfies and narcissism correlate? Psychologist Gwendolyn Seidman suggests that there’s a link. She cites two studies that examined the prevalence of Facebook selfies in a sample of over 1,000 people. Men in the sample who posted a greater number of selfies were more likely to show evidence of narcissism. Among female respondents, the number of selfie posts was associated only with a subdimension of narcissism called “admiration demand,” defined as “feeling entitled to special status or privileges and feeling superior to others.” Bottom line: selfies and narcissism appear to be linked.

How we stack up against others

Selfies seem to be this generation’s preferred mode of self-expression. Psychologists who study the self-concept have suggested that our self-image, and how we project it, is filtered through two criteria: believability (how credible are the claims I make about myself?) and beneficiality (how attractive, talented and desirable are the claims I make about myself?). In this sense, the selfie is the perfect medium: it’s an easy way to offer proof of an exciting life, extraordinary talent and ability, unique experiences, personal beauty and attractiveness.

As a psychologist, I find it important not only to ask why people post selfies, but also to ask why anyone bothers looking at them. Evidence suggests that people simply like viewing faces. Selfies attract more attention and more comments than any other photos, and our friends and peers reinforce selfie-taking by doling out “likes” and other forms of approval on social media.

One explanation for why people are so drawn to looking at selfies could be a psychological framework called social comparison theory. The theory’s originator, Leon Festinger, proposed that people have an innate drive to evaluate themselves in comparison with others. This is done to improve how we feel about ourselves (self-enhancement), evaluate ourselves (self-evaluation), prove we really are the way we think we are (self-verification) and become better than we are (self-improvement).

It’s a list that suggests a range of motives that appear quite positive. But reality, unfortunately, is not so upbeat. Those most likely to post selfies appear to have lower self-esteem than those who don’t.

In sum, selfies draw attention, which seems like a good thing. But so do car accidents. The approval that comes from “likes” and positive comments on social media is rewarding – particularly for the lonely, isolated or insecure. However, the evidence, on balance (combined with people and animals dying!), suggests there is little to celebrate about the craze.

By Michael Weigold, Professor of Advertising, University of Florida

This article was originally published on The Conversation. Read the original article.


Stereotype threat can make female game players worse at gaming

Female video game players perform worse and feel worse about themselves when they are reminded of negative gender stereotypes, according to a study published in Computers in Human Behavior.

Recent controversies have highlighted the pervasiveness of hostility and negative attitudes towards female gamers, who tend to be perceived as less competent players than male gamers, and as not fitting into the world of video gaming. A great deal of research in other domains, including education and work, has established the importance of a psychological phenomenon known as stereotype threat. When people who belong to a stereotyped group are put in a position in which they feel they are acting as representatives of that group, they tend to feel anxiety about the risk of confirming those negative stereotypes in others’ eyes. That anxiety can lead to worse performance, which can ironically reinforce the same negative group stereotype.

A team of researchers led by Lotte Vermeulen, of iMinds-MICT-Ghent University, conducted an experiment to study how female players are affected by stereotype threat in the context of online gaming. One hundred women were recruited online and randomly assigned to three groups. The women first played a practice round of a puzzle-platform game. After the first round, they viewed a list of high scorers for the game. One-third of the women saw a list that was dominated by male names and represented by male avatars. The second third saw a similar list dominated by female players. The list viewed by the final third contained gender-neutral names with no avatars. After viewing the list of current leaders, the women were instructed to play the game again with the goal of beating the high score.

Women in the first group, who thought that the high scorers were almost all men, showed signs of experiencing stereotype threat. They reported less confidence in their abilities, greater anxiety before playing, and had significantly worse scores than those in the other groups. Importantly, these effects were strongest among the most experienced players. Stereotype threat caused greater anxiety about playing, and affected game scores the most, for women who were frequent online gamers. The impact was even greater still for those who strongly identified with the label “gamer.”

The authors of the study suggest that the phenomenon of stereotype threat may help explain why women are less likely than men to describe themselves as gamers, even when they spend equal amounts of time playing games. Distancing themselves from the gamer identity may be a defense mechanism that lessens the psychological impact of negative stereotypes against female gamers. The experiment also helps to show how these negative expectations can become self-fulfilling prophecies.


Elite cyclists are more resilient to mental fatigue, study finds

As British cyclist Chris Froome celebrates his third Tour de France victory, research from the University of Kent and Australian collaborators shows for the first time that elite endurance athletes have a superior ability to resist mental fatigue.

Professor Samuele Marcora, Director of Research in Kent’s School of Sport and Exercise Sciences, co-authored a report in the journal PLOS ONE entitled Superior Inhibitory Control and Resistance to Mental Fatigue in Professional Road Cyclists. For the study, Professor Marcora and colleagues compared the performance of 11 professional cyclists and nine recreational cyclists in various tests. As expected, the professional cyclists outperformed the recreational cyclists in a simulated time trial in the laboratory. The new finding was that while the recreational cyclists slowed down after performing a computerised cognitive task designed to induce mental fatigue, the professional cyclists’ time-trial performance was not affected.

In addition, the professional cyclists performed better than the recreational cyclists in the computerised cognitive task, which measured ‘inhibitory control’, or willpower. This is not surprising, as the ability to suffer is a major factor in the sport of cycling.

Professor Marcora says that the two effects go hand in hand, because becoming resistant to mental fatigue should bolster willpower during the latter stages of a competition such as the Tour de France. Although these traits are largely hereditary, he speculates that superior willpower and resistance to mental fatigue may be trained through hard physical training and the demanding lifestyle of elite endurance athletes. Professor Marcora is also developing, in collaboration with the Ministry of Defence, a new training method (Brain Endurance Training) to boost resistance to mental fatigue and endurance performance even further.


Study IDs key indicators linking violence and mental illness

New research from North Carolina State University, RTI International, Arizona State University and Duke University Medical Center finds a host of factors associated with the subsequent risk of adults with mental illness becoming victims or perpetrators of violence. The work highlights the importance of interventions to treat mental-health problems in order to reduce community violence and instances of mental-health crises.

“This work builds on an earlier study that found almost one-third of adults with mental illness are likely to be victims of violence within a six-month period,” says Richard Van Dorn, a researcher at RTI and lead author of a paper describing the work. “In this study, we addressed two fundamental questions: If someone is victimized, is he or she more likely to become violent? And if someone is violent, is he or she more likely to be victimized? The answer is yes, to both questions.”

The researchers analyzed data from a database of 3,473 adults with mental illnesses who had answered questions about both committing violence and being victims of violence. The database drew from four earlier studies that focused on issues ranging from antipsychotic medications to treatment approaches. Those studies had different research goals, but all asked identical questions related to violence and victimization. For this study, the researchers used a baseline assessment of each study participant’s mental health and violence history as a starting point, and then tracked the data on each participant for up to 36 months. Specifically, the researchers assessed each individual’s homelessness, inpatient mental-health treatment, psychological symptoms of mental illness, substance use, and experiences as a victim or perpetrator of violence.
The researchers evaluated all of these items as both indicators and outcomes – that is, as both causes and effects.

“We found that all of these indicators mattered, but often in different ways,” says Sarah Desmarais, an associate professor of psychology at NC State and co-author of the paper. “For example, drug use was a leading indicator of committing violence, while alcohol use was a leading indicator of being a victim of violence.”

However, the researchers also found that one particular category of psychological symptoms was closely associated with violence: affective symptoms.

“By affect, we mean symptoms including anxiety, depressive symptoms and poor impulse control,” Desmarais says. “The more pronounced affective symptoms were, the more likely someone was to both commit violence and be a victim of violence.

“This is particularly important because good practices already exist for how to help people, such as therapeutic interventions or medication,” she adds. “And by treating people who are exhibiting these symptoms, we could reduce violence. Just treating drug or alcohol use – which is what happens in many cases – isn’t enough. We need to treat the underlying mental illness that is associated with these affective symptoms.”

The research also highlighted how one violent event can cascade over time. For example, the researchers found that, on average, a single event in which a person was a victim of violence triggered seven other effects, such as psychological symptoms, homelessness and becoming a perpetrator of violence.
Those seven effects, on average, triggered an additional 39 effects.

“It’s a complex series of interactions that spirals over time, exacerbating substance use, mental-health problems and violent behavior,” Van Dorn says. “These results tell us that we need to evaluate how we treat adults with severe mental illness.”

“Investing in community-based mental health treatment programs would significantly reduce violent events in this population,” says Desmarais. “That would be more effective and efficient than waiting for people to either show up at emergency rooms in the midst of a mental-health crisis or become involved in the legal system as either victims or perpetrators of violence.

“We have treatments for all of these problems; we just need to make them available to the people that need them,” Desmarais says.


Why stories matter for children’s learning

Ever wondered why boys and girls choose particular toys, particular colors and particular stories? Why is it that girls want to dress in pink and be princesses, while boys want to be Darth Vader, warriors and space adventurers?

Stories told to children can make a difference. Scholars have found that stories have a strong influence on children’s understanding of cultural and gender roles. Stories do not just develop children’s literacy; they convey values, beliefs, attitudes and social norms which, in turn, shape children’s perceptions of reality. I have found through my research that children learn how to behave, think and act through the characters they meet in stories.

So, how do stories shape children’s perspectives?

Why stories matter

Stories – whether told through picture books, dance, images, math equations, songs or oral retellings – are one of the most fundamental ways in which we communicate. Nearly 80 years ago, Louise Rosenblatt, a widely known scholar of literature, articulated that we understand ourselves through the lives of characters in stories.
She argued that stories help readers understand how authors and their characters think and why they act the way they do. Similarly, research conducted by Kathy Short, a scholar of children’s literature, shows that through stories children learn to develop a critical perspective about how to engage in social action. Stories help children develop empathy and cultivate imaginative and divergent thinking – that is, thinking that generates a range of possible ideas and solutions around story events, rather than looking for single or literal responses.

Impact of stories

So, when and where do children develop perspectives about their world, and how do stories shape that? Studies have shown that children develop their perspectives on aspects of identity such as gender and race before the age of five. A key work by novelist John Berger suggests that very young children begin to recognize patterns and visually read their worlds before they learn to speak, write or read printed language. The stories that they read or see can have a strong influence on how they think and behave.

For example, research conducted by scholar Vivian Vasquez shows that young children play out or draw narratives in which they become part of the story. In her research, Vasquez describes how four-year-old Hannah mixes reality with fiction in her drawings of Rudolph the reindeer: Hannah adds a person in the middle with a red X above him, alongside the reindeer. My own research has yielded similar insights. I have found that children internalize the cultural and gender roles of characters in stories.

Vasquez explains that Hannah had experienced bullying by the boys in her class and did not like seeing Rudolph called names and bullied by the other reindeer when she read Rudolph the Red-Nosed Reindeer.
Vasquez suggests that Hannah’s picture conveyed her desire not to have the boys tease Rudolph – and, more importantly, her.

In one such study that I conducted over a six-week period, third-grade children read and discussed the roles of male and female characters in a number of different stories. Children then reenacted gender roles (e.g., girls as passive; evil stepsisters). Later, children rewrote these stories as “fractured fairy tales.” That is, children rewrote characters and their roles into ones that mirrored present-day roles that men and women take on. The roles for girls, for example, were rewritten to show that they worked and played outside the home.

Subsequently, we asked the girls to draw what they thought boys were interested in, and boys to draw what they thought girls were interested in. We were surprised that nearly all children drew symbols, stories and settings that represented traditional perceptions of gendered roles. That is, boys drew girls as princesses in castles with a male about to save them from dragons. These images were adorned with rainbows, flowers and hearts. Girls drew boys in outdoor spaces, and as adventurers and athletes.

For example, look at the image here, drawn by an eight-year-old boy. It depicts two things: First, the boy recreates a traditional storyline from his reading of fairy tales (a princess needs saving by a prince). Second, he “remixes” his reading of fairy tales with his own real interest in space travel. Even though he engaged in discussions on how gender should not determine particular roles in society (e.g., women as caregivers; men as breadwinners), his image suggests that reading traditional stories, such as fairy tales, contributes to his understanding of gender roles.

Our findings are further corroborated by the work of scholar Karen Wohlwend, who found a strong influence of Disney stories on young children.
In her research, she found that very young girls, influenced by these stories, are more likely to become “damsels in distress” during play.

However, it is not only the written word that has such influence on children. Before they begin to read written words, young children depend on pictures to read and understand stories. Another scholar, Hilary Janks, has shown that children interpret and internalize perspectives through images – another type of storytelling.

Stories for change

Scholars have also shown how stories can be used to change children’s perspectives about people in different parts of the world. And not just that; stories can also influence how children choose to act in the world. For example, Hilary Janks works with children and teachers on how images in stories about refugees influence how refugees are perceived.

Kathy Short studied children’s engagement with literature around human rights. Working in a diverse K-5 school with 200 children, she and her colleagues found that stories moved even such young children to consider how they could bring change to their own local community and school. These children were influenced by stories of child activists such as Iqbal, the real-life story of Iqbal Masih, a child activist who campaigned for laws against child labor. (He was murdered at age 12 for his activism.) Children read these stories while learning about human rights violations and the lack of food for many around the world. In this school, children were motivated to create a community garden to support a local food bank.

Building intercultural perspectives

Today’s classrooms represent a vast diversity.
In Atlanta, where I teach and live, children in one school cluster alone represent over 65 countries and speak over 75 languages. Indeed, the diversity of the world is woven into our everyday lives through various forms of media.

When children read stories about other children from around the world, such as “Iqbal,” they learn new perspectives that both extend beyond and connect with their local contexts. At a time when children are being exposed to negative narratives about an entire religious group from US presidential candidates and others, the need for children to read, see and hear global stories that counter and challenge such narratives is, I would argue, even greater.

By Peggy Albers, Professor of language and literacy education, Georgia State University

This article was originally published on The Conversation. Read the original article.


Developmental psychologist: Five-month-old babies know what’s funny

Before they speak or crawl or walk or achieve many of the other amazing developmental milestones of the first year of life, babies laugh. This simple act makes its debut around the fourth month of life, ushering in a host of social and cognitive opportunities for the infant. Yet despite the universality of this humble response and its remarkably early appearance, infant laughter has not been taken seriously – at least, not until recently. In the past decade, researchers have started to examine what infant laughter can reveal about the youngest minds: whether infants truly understand funniness, and if so, how.

Prompted by observations of infant laughter made by none other than Charles Darwin himself, modern psychologists have begun to ask whether infant laughter has a purpose or can reveal something about infants’ understanding of the world. Darwin speculated that laughter, like other universal emotional expressions, serves an important communicative function, which explains why nature preserved and prioritised it. Two key pieces of evidence support Darwin’s hunch. First, according to the psychologist Jaak Panksepp of Bowling Green State University in Ohio, laughter is not uniquely human. Its acoustic, rhythmic and facial precursors appear in other mammals, particularly in juveniles while they are at play, pointing to the role of evolution in human laughter. Second, the pleasure of laughter is neurologically based: it activates the dopamine (‘reward’) centre of the brain.

Laughing – in many ways – has the same effect on social partners as playing. While the pleasure of playing is a way for juveniles to bond with each other, the pleasure of laughing is a way for adults to do so, as across mammalian species adults rarely ‘play’. Shared laughter is as effective as playing in finding others to be a source of joy and satisfaction. Thus laughter biologically reinforces sociability, ensuring the togetherness needed for survival.
However, laughter is not only key to survival. It also is key to understanding others, including what it reveals about infants. For example, infants can employ fake laughter (and fake crying!) beginning at about six months of age, and do so when being excluded or ignored, or when trying to engage a social partner. These little fake-outs show that infants are capable of simple acts of deception much earlier than scholars previously thought – though parents have long known such tricks reveal their infants’ cleverness. Similarly, the psychologist Vasu Reddy of the University of Portsmouth has found that, by eight months, infants can use a specific type of humour: teasing. For example, the baby might willingly hand over the car keys she’s been allowed to play with, but whip her hand back quickly, just before allowing her dad to take possession, all the while looking at him with a cheeky grin. Reddy calls this type of teasing ‘provocative non-compliance’. She has found that eight- to 12-month-olds use other types of teasing as well, including provocative disruption, as in toppling over a tower someone else has carefully built.

Teasing is the infant’s attempt to playfully provoke another person into interacting. It shows that infants understand something about others’ minds and intentions. In this example, the infant understands that she can make her father think that she will relinquish the car keys. The ability to trick others in this way suggests that infants are maturing toward a Theory of Mind, the understanding that others have minds that are separate from one’s own and that can be fooled. Psychologists have generally thought children don’t reach this milestone until about four and a half years of age.
Infants’ ability to humorously tease reveals they are progressing toward a Theory of Mind much earlier than previously thought.

Additional evidence for this early Theory of Mind comes from studies showing that infants are quite capable of intentionally making others laugh, also by about the age of eight months. Infants do so by making silly faces and sounds, by performing absurd acts such as exposing hidden body parts or waving their stinky feet in the air, and by initiating games such as peekaboo that have previously invoked laughter. Knowing what another will find funny implies that infants understand something about another person, and use that understanding to their joyful advantage. This attempt to make others laugh is not seen among children and adults with autism, one feature of which is an impaired understanding of others’ social and emotional behaviours. Individuals with autism do laugh, but tend to do so in isolation or in response to stimuli that don’t elicit laughter in people without autism. They might mimic laughter, but not share it. In a sense, their laughter is non-social.

Perhaps because infants are so young, we have been reluctant to credit them with understanding ‘funniness’. Their laughs are more often attributed to ‘gas’ (a myth long ago dispelled) or imitation, or having been reinforced for laughing in response to certain events – like Mom singing in an ‘opera voice’. As it turns out, getting the joke doesn’t require advanced cognitive skills. And much of what it does require is within the infant’s grasp.

Although infants do imitate smiling, starting in the first few months of life, and prefer to look at smiles compared with negative emotional expressions, and although they might be reinforced for laughing at particular events, these are not sufficient explanations for infant laughter and humour.
If they were, then imitation and reinforcement would need to account for most infant laughter, and this is simply not the case in life or in the research lab. In addition, it would suggest that infants are not capable of understanding new humorous events unless someone were available to interpret for them and/or to reinforce their laughter. Instead, research has shown that, within the first six months of life, infants can interpret a new event as funny all by themselves.

So how do they do it? Like children and adults, infants appear to rely on two key features to detect funniness. First, humour nearly always requires a social component. Using naturalistic observations, the psychologists Robert Kraut and Robert Johnston at Cornell, and later the neuroscientist Robert Provine at the University of Maryland, discovered that smiling is more strongly associated with the presence of other people, and only erratically associated with feelings of happiness. That is, smiling is more likely to be socially rather than emotionally motivated. Thus, the presence of a social partner is one key component of finding something funny. Recall that the point of laughter is to be shared.

But humour has a cognitive element too: that of incongruity. Humorous events are absurd iterations of ordinary experiences that violate our expectations. When a banana is used as a phone, when a large burly man speaks in a Minnie Mouse voice, when 20 clowns emerge from a tiny car, we are presented with something bizarre and irrational, and are left to make sense of it. Infants, too, engage in this process.

We showed six-month-olds ordinary events (a researcher pretending to drink from a red plastic cup) and absurd iterations of those events (the researcher pretending to wear the red cup as a hat). In one condition, we instructed parents to remain emotionally neutral during the absurd event.
Not only did infants find the absurd version of the event funny, they found it funny even when their parents remained neutral. That is, infants did not rely on their parents’ interpretation of the event as ‘funny’ to find it humorous themselves. When repeated with five-month-olds, we got the same results. Even with only a month of laughter experience under their belts, five-monthers independently interpreted the funniness of an event.

However, detecting incongruity isn’t the end of the story. Magical events are similarly incongruous, but adults, children and even infants do not laugh at them. Elizabeth Spelke of Harvard and Renée Baillargeon of the University of Illinois have observed that when natural laws are violated – a ball disappears into thin air or an object passes through a solid barrier – infants behave exactly as adults and children do: they don’t laugh, they stare. Why? Humour researchers theorise that although magic and humour both involve incongruity, only humour involves its resolution. In jokes, the resolution comes in the form of a punchline. It’s the ‘Ah-ha!’ moment when one gets the joke. It’s not known if infants are able to resolve incongruity, but that they laugh at humour and stare at magic suggests that they can. Perhaps they can simply distinguish that humorous events are possible and magical events are not, and this is enough to make the former funny. It’s up to researchers to solve this next piece of the puzzle. Until then, infants will have the last laugh.

By Gina Mireault

This article was originally published at Aeon and has been republished under Creative Commons.


This fascinating concept could help us better understand why belief in God is so widespread

Psychology research published in the journal Religion, Brain & Behavior provides new clues as to why some individuals believe in a god while others do not.

The two-part study of 316 Americans found that religious “credibility-enhancing displays” (CREDs) were positively linked to the belief in God and religiosity.

In the study, the survey assessing “credibility-enhancing displays” included questions such as: “To what extent did your caregiver(s) act fairly to others because their religion taught them so?”, “To what extent did your caregiver(s) live a religiously pure life?” and “To what extent did your caregiver(s) avoid harming others because their religion taught them so?”

Individuals who were exposed to more of these religious CREDs tended to have a higher certainty in the existence of God. Conversely, those exposed to fewer religious CREDs tended to have a higher certainty in the non-existence of God.

PsyPost interviewed the study’s corresponding author, Dr. Jonathan Lanman of Queen’s University Belfast. Read his explanation of the research below:

PsyPost: Why were you interested in this topic?

Lanman: I became interested in explaining who becomes a theist and who becomes a non-theist in 2007 when I noticed a rather odd juxtaposition. On the one hand, the previous decade had seen the rapid development of the cognitive science of religion, a field aiming to explain the existence and persistence of religious belief via cognitive universals. On the other hand, that same decade had seen a surge in both popular atheist publications (such as Sam Harris’s The End of Faith and Richard Dawkins’s The God Delusion) and the memberships of atheist and humanist groups around the world.

If everyone had the universal cognitive mechanisms that drive religious belief, then why would we be seeing such a growth in atheism?
I could only suspect that while cognitive universals make religious beliefs possible, they do not determine belief in all individuals, and that other factors are necessary to explain who ends up a theist and who doesn’t.

I also wasn’t satisfied with the most well-supported theory on offer, the existential security hypothesis, which holds that nations with high levels of personal and social security produce many more nontheists. The sociological connection seemed correct to me, but the psychological assumption that a need for comfort in the face of insecurity explains religious belief struck me as implausible. Religious ideas in highly insecure places are often far from comforting (think witchcraft beliefs and angry forest spirits). Meanwhile, it’s in the affluent West where we’ve transitioned to such comforting ideas as New Age fulfillment and hell-less Christianity. Also, while there’s a lot of evidence for motivated reasoning in psychology, there’s little evidence to suggest that we move from a state of unbelief in some entity to a state of belief simply because it would be comforting if that entity existed.

Existential security mattered, I thought, but had to be connected to theism and nontheism in some other way. I thought that I found that other way when I came across Joe Henrich’s notion of Credibility Enhancing Displays and their role in affecting our beliefs. If the CREDs hypothesis is correct, then we no longer need the assumption about people the world over coming to believe in supernatural agents because of a need for comfort. Instead, we can recognize the abundant evidence suggesting that feelings of threat and insecurity increase our commitments to social groups (including religions) and our actions demonstrating such commitments.
In other words, feeling threatened and insecure increases the number and intensity of the religious CREDs people perform, which, in turn, allows for the successful transmission of religious beliefs to new generations. High levels of existential security mean lower levels of CRED performance and, so I hypothesized, increased secularization.

In short, I became interested in CREDs because I saw them as potentially being a crucial causal element in explaining differing rates of secularization around the world.

What should the average person take away from your study?

I think this is one of those studies where a great many people will say that the conclusion is obvious. We have such sayings as ‘practice what you preach,’ ‘walking the walk,’ and ‘actions speak louder than words’ for good reasons. I think most people get the idea that matching your words with your actions will make you more persuasive. This study provides firmer quantitative evidence that this is indeed the case for theistic belief in the United States.

What might be new is just how important those actions are in comparison to words. In our studies, when we put both our measure of general religious socialization (the talking the talk) and our measure of CREDs (the walking the walk) into regression models, the CREDs measure held all of the predictive power. Actions matter more than words, and the evidence here suggests they matter dramatically more in convincing cultural learners of the existence of God.

Are there any major caveats? What questions still need to be addressed?

This study was only done in the United States, so we don’t know how widespread the CREDs effect is. Recently though, Aiyana Willard and Lubomír Cingl have replicated the CREDs effect in Slovakia and the Czech Republic and Hugh Turpin has replicated it in the Republic of Ireland.
That’s still only a small sample of cultural environments, however.

Another major caveat is that our studies relied on retrospective self-reports about the actions of one’s parents during one’s upbringing. Such retrospective reports can be affected by a number of other things. To really be justified in making causal claims about CREDs, we’ll need more experimental research. Initial experiments on CREDs are promising, but we have a ways to go.

Is there anything else you would like to add?

The claim that exposure to CREDs is a major determining factor of who ends up a theist and who ends up a nontheist is a probabilistic claim, not a deterministic one. Some people will get exposed to high levels of CREDs and reject theism while others will get minimal exposure and embrace it. Further, there are certainly other factors that influence who ends up a theist and who doesn’t besides CREDs exposure, such as differences in particular cognitive biases and moral evaluations of specific religious traditions.

The study, “Religious actions speak louder than words: exposure to credibility-enhancing displays predicts theism“, was also co-authored by Michael D. Buhrmester.


Study uncovers how echo chambers provide the initial fuel for misinformation to go viral

New research sheds light on how the structure of online social networks causes misinformation to go viral on the internet. The findings, published in PLOS One, indicate that social “echo chambers” act like kindling that gives misinformation the initial flare-up it needs to quickly spread.

“While the link between ‘echo chambers’ and ‘fake news’ has been both widely discussed and been the subject of significant research effort, this discussion has largely left out the impact of the complex network dynamics that are so quintessential to online social media,” said study author Petter Törnberg, a sociologist from the University of Amsterdam.

“Such dynamics are central to understanding new social media, since what receives attention on social media is largely the result of what we call ’emergent processes’, that is, the unintended and unexpected macro consequences of the micro-interactions of millions of users. This is producing certain dynamics that, while having become central to current societal trends, are exceptionally difficult to study using our traditional research tools.”

Törnberg used a computer model to simulate misinformation spreading via shares in online social networks. He found that misinformation tended to “go viral” after first resonating within a cluster of like-minded individuals.

“The study finds that the presence of a group of like-minded users will tend to increase the virality of news and rumors that resonate with the views of this cluster,” Törnberg told PsyPost.

“This seems unintuitive: one would think that if a group of users are less connected to others, they will have a harder time spreading their views. As I write in the article, the nature of this link is perhaps most clear through the lens of a metaphor: if we think of the viral spread of misinformation in a social network as akin to a wildfire, an echo chamber has the same effect as a dry pile of tinder in the forest; it provides the fuel for an initial small flame, that can spread to larger sticks, branches, trees, to finally engulf the forest.”

“This essentially means that social media may, for structural reasons, give precedence to fringe ideas,” Törnberg remarked.

To validate that these dynamics reflected what was actually happening on the internet, Törnberg plugged data from Twitter into his model to generate empirically grounded networks. But the study — like all research — has limitations.

“Simulations are exceptionally powerful for identifying the link between underlying mechanisms and macro-dynamics, as they allow the isolated study of dynamic phenomena. However, like any approach based on abstraction, they are also always limited since we cannot be sure that the causal mechanism that they identify is in fact important in real-world dynamics,” Törnberg explained.

“The effects may be crowded out by factors that are not included in the simulation. Additional empirical study is therefore necessary to assess the impact of these emergent dynamics.”

“This paper is part of the EU research project ODYCCEUS, which focuses on using methods from the hard sciences to study societal polarization and cultural conflicts. I’m currently pursuing multiple ideas as part of this research project, some of which continues on the ideas of this paper, by modeling the dynamics of online debate,” Törnberg added.

“To follow my publications, follow me on Twitter (@pettertornberg) or on ResearchGate.”
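Törnberg’s actual model is considerably more sophisticated, but the “tinder” effect he describes can be illustrated with a toy simulation. The sketch below is not the study’s code; all parameter values and function names are hypothetical choices for illustration. It builds a network containing one densely connected “echo chamber” inside a sparsely connected population, then spreads a rumor with a simple independent-cascade process.

```python
import random

def make_network(n=200, cluster_size=20, p_in=0.3, p_out=0.02, seed=1):
    """Toy social network: nodes 0..cluster_size-1 form a dense,
    like-minded cluster; all other ties are sparse and random."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            # Dense ties inside the echo chamber, sparse ties elsewhere.
            p = p_in if (i < cluster_size and j < cluster_size) else p_out
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def cascade(adj, seeds, share_prob=0.15, seed=2):
    """Independent-cascade spread: each node that adopts the rumor gets
    one chance to pass it to each neighbour. Returns total reach."""
    rng = random.Random(seed)
    reached = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in adj[node]:
                if nb not in reached and rng.random() < share_prob:
                    reached.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(reached)

# Compare average reach: seeding the rumor inside the echo chamber
# vs. seeding the same number of scattered users elsewhere.
net = make_network()
in_cluster = sum(cascade(net, [0, 1, 2], seed=t) for t in range(50)) / 50
scattered = sum(cascade(net, [50, 100, 150], seed=t) for t in range(50)) / 50
```

Averaged over repeated runs, seeds placed inside the dense cluster reliably reach far more of the network: the rumor first saturates the chamber, and the chamber’s accumulated adopters then leak it outward, whereas scattered seeds sputter out in the sparse population. This mirrors the wildfire metaphor above, under these toy assumptions.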
