
News

Study Finds Rich People Don’t Live That Much Longer Than The Poor


New research, published in Proceedings of the National Academy of Sciences (PNAS), challenges previous findings of large differences in life expectancy between the rich and those at the bottom of the income scale. In real life, people don't necessarily stay poor or stay rich, as previous research assumed. Three economists from the University of Copenhagen have now found a way to take this mobility between income classes into account, providing a more realistic way to calculate life expectancy for people from different walks of life. Their results show that the difference between the lifespans of a rich and a poor person is in fact not that big.

In 2016, an influential study published in the Journal of the American Medical Association by a Harvard research team showed that high-income people in the US can expect to live 6.5 years longer at age 40 than low-income individuals. This research gave rise to a substantial debate about inequality in health in the US.

The existing method assumes that the poor stay poor and the rich stay rich for the rest of their lives. In reality, however, over a ten-year period half of the poorest people move into higher income groups and, likewise, half of the rich drop into lower income classes. The mortality of those who move to a different income class differs significantly from the mortality of those who stay in the same class.

Until now, this mobility between income classes has made it difficult to calculate life expectancy across different groups in the population. Danish economists Claus Thustrup Kreiner, Torben Heien Nielsen, and Benjamin Ly Serena from the Center for Economic Behavior and Inequality (CEBI) at the University of Copenhagen (UCPH) have devised a method that accounts for income mobility in the relationship between income and life expectancy by incorporating a classic model of social mobility from the literature.

The authors demonstrated their approach by calculating life expectancy at age 40 in Denmark, based on official income and mortality records for the entire population of Danish women and men over the period 1983-2013. Accounting for mobility approximately halved the estimated difference in life expectancy between people in low and high income groups:

When accounting for income mobility, life expectancy for a 40-year-old man in the upper income groups is 77.6 years, compared with 75.2 years for a man in the poorer groups: a difference of 2.4 years. For women, the difference between high and low income groups is 2.2 years. Without taking income mobility into account, however, the life expectancy difference was twice as big, around five years, for both men and women. Applying their method to the US figures, the authors suggest the difference there is three years rather than 6.5.
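The intuition behind the correction can be sketched with a toy two-class Markov model. All numbers below are made up for illustration (the paper's actual model and the Danish registry data are far richer); the point is only that letting people switch income class mixes the mortality rates over time and shrinks the measured gap.

```python
# Toy model: two income classes with different annual mortality,
# and an annual chance of switching class. Hypothetical rates only.
q = {"low": 0.020, "high": 0.012}   # annual mortality by current income class
m = 0.05                            # annual probability of switching class

def life_expectancy(start, mobility, max_years=100):
    """Expected remaining years from age 40 for someone starting in `start`."""
    # p[c] = probability of being alive and in class c at the start of a year
    p = {"low": 0.0, "high": 0.0}
    p[start] = 1.0
    total = 0.0
    for _ in range(max_years):
        total += p["low"] + p["high"]   # each survived year adds one expected year
        # survive the year, then possibly switch income class
        s_low = p["low"] * (1 - q["low"])
        s_high = p["high"] * (1 - q["high"])
        if mobility:
            p["low"] = s_low * (1 - m) + s_high * m
            p["high"] = s_high * (1 - m) + s_low * m
        else:
            p["low"], p["high"] = s_low, s_high
    return total

for mob in (False, True):
    gap = life_expectancy("high", mob) - life_expectancy("low", mob)
    print(f"mobility={mob}: rich-poor gap = {gap:.1f} years")
```

With mobility switched on, a person's long-run mortality tends toward the population average regardless of starting class, so the gap between starting classes narrows, which is the qualitative effect the Copenhagen method captures.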

"Our results reveal that inequality in life expectancy is significantly exaggerated when mobility is not accounted for. This result is essential for our understanding of one of the most important measures of inequality in a society, namely how long different groups can expect to live. Moreover, by mis-measuring this type of inequality, we reach misleading conclusions about the costs and benefits of public health programs such as Medicare and of social security policies. For instance, because the rich live longer, they also benefit for many more years from old-age pension benefits," says Professor Claus Thustrup Kreiner.

The difference is growing

Even though inequality in life expectancy now proves to be only half as big as previously thought, the new UCPH research, funded by the Danish National Research Foundation, also shows that the difference in life expectancy between the rich and the poor has steadily increased over the 30 years covered by the data. This is despite Denmark being a country internationally renowned for its free health care and education, as well as a finely meshed welfare system that in many respects is thought to compensate for differences in income.

Explaining this steadily and strikingly growing difference in life expectancy is beyond the scope of this particular project, but the UCPH economists point out that other research has demonstrated that individuals in high-income, high-education groups benefit more from new health technologies and take greater advantage of new types of treatment and disease prevention.


Face It. Our Faces Don’t Always Reveal Our True Emotions


Actor James Franco looks sort of happy as he records a video diary in the movie “127 Hours.” It’s not until the camera zooms out, revealing his arm is crushed under a boulder, that it becomes clear his goofy smile belies his agony.

That’s because when it comes to reading a person’s state of mind, visual context — as in background and action — is just as important as facial expressions and body language, according to a new study from UC Berkeley.

The findings, to appear online this week in the journal Proceedings of the National Academy of Sciences, challenge decades of research positing that emotional intelligence and recognition are based largely on the ability to read micro-expressions signaling happiness, sadness, anger, fear, surprise, disgust, contempt and other positive and negative moods and sentiments.

Implications include the potential limitations of facial recognition software.

“Our study reveals that emotion recognition is, at its heart, an issue of context as much as it is about faces,” said study lead author Zhimin Chen, a doctoral student in psychology at UC Berkeley.

Researchers blurred the faces and bodies of actors in dozens of muted clips from Hollywood movies and home videos. Despite the characters’ virtual invisibility, hundreds of study participants were able to accurately read their emotions by examining the background and how they were interacting with their surroundings.

The “affective tracking” model that Chen created for the study allows researchers to track how people rate the moment-to-moment emotions of characters as they view videos.

Chen’s method is capable of collecting large quantities of data in a short time, and could eventually be used to gauge how people with disorders like autism and schizophrenia read emotions in real time and help with their diagnoses.

“Some people might have deficits in recognizing facial expressions, but can recognize emotion from the context,” Chen said.

“For others, it’s the opposite.”

Moreover, the findings, based on statistical analyses of the ratings collected, could inform the development of facial recognition technology.

“Right now, companies are developing machine learning algorithms for recognizing emotions, but they only train their models on cropped faces and those models can only read emotions from faces,” Chen said.

“Our research shows that faces don’t reveal true emotions very accurately and that identifying a person’s frame of mind should take into account context as well.”

For the study, Chen and study senior author David Whitney, a UC Berkeley vision scientist, tested the emotion recognition abilities of nearly 400 young adults. The visual stimuli they used were video clips from various Hollywood movies as well as documentaries and home videos that showed emotional responses in more natural settings.

In the first of three experiments, 33 study participants viewed interactions in movie clips between two characters, one of which was blurred, and rated the perceived emotions of the blurred character. The results showed that participants inferred how the invisible character was feeling based not only on the interpersonal interactions, but also on what was happening in the background.

Study participants went online to view and rate the video clips. A rating grid was superimposed over the video so that researchers could track each participant's cursor as it moved around the screen, recording their moment-to-moment emotion ratings.
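The article does not spell out how the affective tracking model aggregates these cursor ratings, but the general idea of pooling moment-to-moment ratings across viewers and checking their agreement can be sketched as follows, with simulated data standing in for the real cursor traces (the 2-D grid, rater count, and noise level are all assumptions):

```python
import numpy as np

# Simulated data: 30 raters, 200 time points, 2-D cursor positions on a rating grid
rng = np.random.default_rng(0)
true_signal = np.cumsum(rng.normal(size=(200, 2)), axis=0)        # a character's "true" emotion path
ratings = true_signal + rng.normal(scale=2.0, size=(30, 200, 2))  # each rater = signal + noise

# Consensus trajectory: the mean rating across viewers at each moment
mean_track = ratings.mean(axis=0)   # shape (200, 2)

def loo_agreement(r):
    """Correlate rater r's trace with the mean trace of all other raters."""
    others = (ratings.sum(axis=0) - ratings[r]) / (len(ratings) - 1)
    return np.corrcoef(ratings[r].ravel(), others.ravel())[0, 1]

agreement = np.mean([loo_agreement(r) for r in range(len(ratings))])
print(f"mean leave-one-out agreement: {agreement:.2f}")
```

The leave-one-out correlation is one standard way to verify that viewers track a shared emotional signal rather than rating at random; high agreement is what makes a pooled moment-to-moment consensus meaningful.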

Next, approximately 200 study participants viewed video clips showing interactions under three different conditions: one in which everything was visible, another in which the characters were blurred, and another in which the context was blurred. The results showed that context was as important as facial recognition for decoding emotions.

In the final experiment, 75 study participants viewed clips from documentaries and home videos so that researchers could compare emotion recognition in more naturalistic settings. Again, the context was as critical for inferring the emotions of the characters as were their facial expressions and gestures.

“Overall, the results suggest that context is not only sufficient to perceive emotion, but also necessary to perceive a person’s emotion,” said Whitney, a UC Berkeley psychology professor.

“Face it, the face is not enough to perceive emotion.”



Working Long Hours Linked To Depression In Women


Women who work more than 55 hours a week are at a higher risk of depression but this is not the case for men, according to a new UCL-led study with Queen Mary University of London.

The study of over 20,000 adults, published today in the BMJ's Journal of Epidemiology & Community Health, found that after taking age, income, health and job characteristics into account, women who worked extra-long hours had 7.3% more depressive symptoms than women working a standard 35-40 hour week. Weekend working was linked to a higher risk of depression among both sexes.

Women who worked for all or most weekends had 4.6% more depressive symptoms on average compared to women working only weekdays. Men who worked all or most weekends had 3.4% more depressive symptoms than men working only weekdays.

“This is an observational study, so although we cannot establish the exact causes, we do know many women face the additional burden of doing a larger share of domestic labour than men, leading to extensive total work hours, added time pressures and overwhelming responsibilities,” explained Gill Weston (UCL Institute of Epidemiology and Health Care), PhD candidate and lead author of the study.

“Additionally women who work most weekends tend to be concentrated in low-paid service sector jobs, which have been linked to higher levels of depression.”

The study showed that men tended to work longer hours in paid work than women, and having children affected men’s and women’s work patterns in different ways: while mothers tended to work fewer hours than women without children, fathers tended to work more hours than men without children.

Two thirds of men worked weekends, compared with half of women. Those who worked all or most weekends were more likely to be in low skilled work and to be less satisfied with their job and their earnings than those who only worked Monday to Friday or some weekends.

Researchers analysed data from the Understanding Society, the UK Household Longitudinal Study (UKHLS). This has been tracking the health and wellbeing of a representative sample of 40,000 households across the UK since 2009.

Information about working hours, weekend working, working conditions and psychological distress was collected from 11,215 working men and 12,188 working women between 2010 and 2012. Depressive symptoms such as feeling worthless or incapable were measured using a self-completed general health questionnaire.

“Women in general are more likely to be depressed than men, and this was no different in the study,” Weston said.

“Independent of their working patterns, we also found that workers with the most depressive symptoms were older, on lower incomes, smokers, in physically demanding jobs, and dissatisfied at work.”

She added: “We hope our findings will encourage employers and policy-makers to think about how to reduce the burdens and increase support for women who work long or irregular hours — without restricting their ability to work when they wish to.

“More sympathetic working practices could bring benefits both for workers and for employers — of both sexes.”



Brain Response To Mom’s Voice Differs In Kids With Autism


For most children, the sound of their mother’s voice triggers brain activity patterns distinct from those triggered by an unfamiliar voice. But the unique brain response to mom’s voice is greatly diminished in children with autism, according to a new study from the Stanford University School of Medicine.

The diminished response was seen on fMRI brain scans in face-processing regions and learning and memory centers, as well as in brain networks that process rewards and prioritize different stimuli as important.

The findings were published Feb. 26 in eLife.

“Kids with autism often tune out from the voices around them, and we haven’t known why,” said the study’s lead author, Dan Abrams, PhD, clinical assistant professor of psychiatry and behavioral sciences at Stanford.

“It’s still an open question how this contributes to their overall difficulties with social communication.”

The results suggest that the brains of children with autism are not wired to easily tune into mom’s voice, Abrams said. The study also found that the degree of social communication impairment in individual children with autism was correlated with the degree of abnormality in their brain responses to their mother’s voice.

“This study is giving us a handle on circuits and vocal stimuli that we have to make more engaging for a child with autism,” said the study’s senior author, Vinod Menon, PhD, the Rachael L. and Walter F. Nichols, MD, Professor and professor of psychiatry and behavioral sciences.

“We now have a template for targeting specific neural circuits with cognitive therapies.”

An important social cue

Mom’s voice is an important social cue for most children. For instance, tiny babies recognize and are soothed by their mother’s voice, while young teenagers are more comforted by words of reassurance spoken by their mothers than the same words sent by their mothers via text message, prior research has shown. The response to mom’s voice has a distinct brain-activation signature in children without autism, a 2016 paper co-authored by Abrams and Menon demonstrated.

Autism is a developmental disorder that affects 1 in 59 children. It is characterized by social and communication difficulties, restricted interests and repetitive behaviors. The disorder exists on a spectrum, with some children more impaired than others.


The new study included 42 children ages 7 to 12. Half had autism, and the other half didn’t. The children had their brains scanned using functional magnetic resonance imaging while listening to three different recorded sounds: their mother’s voice; the voices of unfamiliar women; and nonvocal environmental sounds. In the voice recordings, the women said nonsense words to avoid activating language comprehension regions in the brain.

The researchers compared patterns of brain activation and brain network connectivity between the two groups of children. They also asked the children to identify whether each brief (956-millisecond) voice recording they heard came from their mother or an unfamiliar woman. Children without autism correctly identified their mothers’ voices 97.5 percent of the time; those with autism identified their mothers’ voices 87.8 percent of the time, a statistically significant difference.
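As a rough illustration of how a difference like 97.5 percent versus 87.8 percent might be tested for significance, here is a standard two-proportion z-test. The trial counts below are hypothetical (the number of voice recordings judged per child is not given here), so the printed statistic is illustrative, not the paper's result.

```python
from math import sqrt, erf

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion under the null
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value: 2 * P(Z > |z|) using the normal CDF via erf
    pval = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, pval

# Hypothetical: 21 children per group, 20 voice trials each -> 420 trials per group
z, p = two_prop_ztest(int(0.975 * 420), 420, int(0.878 * 420), 420)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With a few hundred trials per group, a ten-point accuracy gap yields a large z-statistic, consistent with the reported difference being statistically significant.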

The brain response to unfamiliar voices, when compared with the response to environmental sounds, was fairly similar in children with and without autism, although those with autism had less activity in one area of the auditory association cortex.

When comparing the brain response to mom’s voice versus unfamiliar voices, children without autism had many more brain areas activated: Mom’s voice preferentially lit up part of the hippocampus, a learning and memory region, as well as face-processing regions. Brain-connectivity patterns measured in a network that included auditory-processing regions, reward-processing regions and regions that determine the importance, or salience, of incoming information also distinguished children with autism from children without autism. The network impairments in individual children with autism were also linked to their individual level of social communication impairment.

‘Really striking relationship’

“There is this really striking relationship between the strength of activity and connectivity in reward and salience regions during voice processing and children’s social communication activity,” Abrams said.

This suggests that brain responses to mom’s voice are a key element for building social communication ability, he added.

The findings support the social motivation theory of autism, which suggests that social interaction is intrinsically less engaging for children with the disorder than for those without it.

Many current autism therapies involve motivating children to engage in specific types of social interaction. It would be interesting to conduct future studies to see whether these therapies change the brain characteristics uncovered in this study, the researchers said.

“Mom’s voice is the primal cue for social and language communication and learning,” Menon said.

“There is an underlying biological difference in the brain circuitry in autism, and this is a precision-learning signal we can target.”

