iGen

by Jean M. Twenge


  When discussing the results in the text, I’ll sometimes refer to how large the change is—for example, if something goes from 10% to 20%, I’ll note that it doubled. When the shifts are less pronounced than that, I sometimes note the percentage increase—not the change in percentage points but the proportional increase in the number of people. For example, an increase in agreement from 10% to 15% is a 50% increase, because it means 50% more people agree (15 – 10 = 5, and 5/10 = .50, or 50%). This is more informative than noting the number of percentage points (here, 5), which doesn’t tell us much about the change in terms of people.
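
  For readers who want to see that arithmetic spelled out, here is a minimal sketch in Python; the 10%-to-15% numbers are the same hypothetical example used above.

```python
def describe_change(old_pct: float, new_pct: float) -> str:
    """Report a change both in percentage points and as a percent increase."""
    point_change = new_pct - old_pct                  # difference in percentage points
    percent_increase = (new_pct - old_pct) / old_pct  # proportional change in people
    return f"{point_change:.0f} percentage points; {percent_increase:.0%} increase"

print(describe_change(10, 15))  # "5 percentage points; 50% increase"
```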

  Age, Time Period, and Cohort

  In most of the analyses, I use these four data sets as time-lag studies, comparing people of the same age at different points in time. That means the differences cannot be due to age, as age is the same for everyone. For example, in the MtF 12th-grade survey the Boomers filling out the surveys in 1976 were 17- and 18-year-old high school seniors, and so were the GenX’ers filling them out in 1990, the Millennials in 2005, and the iGen’ers in 2015. Because these surveys have collected data over many decades, they can rule out effects of age, which is extremely helpful for drawing conclusions about what is cultural change and what is just being young.

  However, these types of data cannot separate the effects of generation (a change that affects only young people) from those of time period (a change that affects people of all ages equally). It could be that everyone, even older adults, is changing, not just young people. Nevertheless, both generation and time period capture cultural change—differences in behavior and attitudes over time. Separating the influences of age, generation, and time period is difficult, because each is completely determined by the other two. For example, if you will be 20 in 2020, you were born in 2000.

  I’ll make two points about this issue. First, practically speaking, it might not always matter much whether a change is due to generation or time period, because both indicate cultural change. The increase in smartphone use is a good example. When smartphones entered the scene in 2007, people of all ages started using them (a time-period effect), though young people adopted them more quickly than older people (a generational effect). Either way, teens were spending a lot of time on something during their formative years that older people had not experienced when they were teens. Everyone in the culture was experiencing the same technological shifts, but that doesn’t negate the profound shift in time use among teens from the 1990s to the 2010s. The same is true for teens spending less time with their friends in person—adults may be getting less face time as well, but that doesn’t change the fact that iGen teens are getting much less face time with their friends than their parents did as teens—or even than Millennials did ten years ago.

  Second, sophisticated statistical techniques can now separate age, generation, and time period, so we can at least start to answer these questions. These techniques can be used only on data sets that include people of different ages over many years, such as the General Social Survey (GSS) of adults over 18. They can’t be used on the surveys of teens, since those don’t vary enough in age.

  My journal article coauthors Ryne Sherman (of Texas Tech University) and Nathan Carter (of the University of Georgia), both experts in advanced multivariate statistics, performed analyses on the GSS data using these new techniques, known as age-period-cohort (APC) analysis and based on hierarchical linear modeling (cohort is another term for generation). Overall, the analyses suggested that attitude shifts—for example, more positive attitudes toward gays and lesbians—were often driven by time period. Changes in behaviors—such as sexual behavior—were often due to generation. Other changes were due to both.
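
  As a rough illustration only (this is not the analysis code used for the book), one common way to set up a hierarchical age-period-cohort model in Python is to treat age as a fixed effect and period and cohort as crossed random effects. The data, the outcome name religiosity, and the random values below are all placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data: survey year, age, and a made-up outcome.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "year": rng.integers(1972, 2015, n),
    "age": rng.integers(18, 90, n),
})
df["cohort"] = df["year"] - df["age"]  # birth year is fully determined by the other two
df["religiosity"] = -0.01 * (df["year"] - 1972) + rng.normal(size=n)  # placeholder outcome

# Age enters as an individual-level fixed effect; period (survey year) and
# cohort (birth year) enter as crossed random effects, estimated as variance
# components within a single all-encompassing group.
model = smf.mixedlm(
    "religiosity ~ age + I(age**2)",
    data=df,
    groups=np.ones(n),  # one group, so all grouping structure sits in vc_formula
    vc_formula={"period": "0 + C(year)", "cohort": "0 + C(cohort)"},
)
result = model.fit()
print(result.summary())
```

  The estimated variance components then indicate how much of the change tracks survey year (period) versus birth year (cohort), once age is held constant.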

  Here’s an example, using some of the research on changes in religious beliefs and practices presented in chapter 5. The source is the GSS survey, which has been administered every year or every other year from 1972 to 2016. The figures show the time-period and generational effects (controlled for each other and both controlled for age) on public religious commitment (which adds together going to religious services, affiliating with a religion, strength of religious affiliation, and confidence in religious institutions). They’re standardized so that the mean is 0 and the standard deviation is 1.
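
  A minimal sketch of that scoring step, assuming hypothetical column names for the four items (the actual GSS variable names differ):

```python
import pandas as pd

# Hypothetical names for the four items making up the index.
items = ["attends_services", "has_affiliation", "strength_of_affiliation", "confidence_in_religion"]

def public_religious_commitment(df: pd.DataFrame) -> pd.Series:
    """Sum the four items into one index, then z-score it (mean 0, standard deviation 1)."""
    index = df[items].sum(axis=1)
    return (index - index.mean()) / index.std()
```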

  Figure A.3. American adults’ public religious commitment, time-period effect, controlling for cohort/generation and age in an APC analysis. General Social Survey, 1972–2014.

  Figure A.4. American adults’ public religious commitment, cohort/generational effect, controlling for time period and age in an APC analysis. General Social Survey, 1972–2014.

  As you can see, these analyses showed both a time-period and a cohort/generational effect for declines in public religious commitment, with the time-period effect about twice as big. In other words, two things were going on: American adults of all ages were growing less religious over time, and later generations were less religious than previous ones.

  We’ve done APC analyses on only a minority of the characteristics I explore in this book, so I have not made them a central focus. Age is always constant in the figures showing yearly change, so that influence is already removed. But it should also be kept in mind that many of these changes have probably affected older people, too.

  Weighting, Moving Averages, Relative Centrality, and Relative Risk

  Most of the time, the numbers in the figures are unadjusted; they’re simply the average percentage or mean for that year in that population, straight from the data file. Sometimes, though, they’ve been adjusted to represent the population more clearly. Nearly all of these adjustments move the results by a percentage point or two, not enough to make much of a difference. For example, the surveys all have weight variables that can make the statistics more representative of the population in terms of demographic composition. Most of the time, using a weight vs. not doing so barely budges the results, so I’ve chosen to present most of the results, including those for Monitoring the Future, without them. The American Freshman data are reported already weighted—important for that survey, as colleges choose to participate and are thus not random; weighting makes the results representative of the population entering four-year colleges and universities for the first time. The General Social Survey samples only one person per household. That means that someone who lives alone is more likely to be in the survey than someone who lives with many people. That makes a difference, particularly for variables concerning religion or politics. For that reason, the GSS results presented here are weighted to correct for this.
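
  For concreteness, here is a small sketch of the difference between an unweighted and a weighted percentage; the responses and weights are invented, and the column names are placeholders rather than actual survey variables.

```python
import numpy as np
import pandas as pd

# Hypothetical respondent-level data: a yes/no response and a survey weight
# (for example, a weight correcting for one-person-per-household sampling).
df = pd.DataFrame({
    "attends_weekly": [1, 0, 0, 1, 0, 1],
    "weight":         [1.0, 2.5, 1.2, 0.8, 3.0, 1.1],
})

unweighted = df["attends_weekly"].mean()
weighted = np.average(df["attends_weekly"], weights=df["weight"])
print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
```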

  Another issue is moving averages, which smooth year-by-year data so the general pattern of change can be seen more easily (they are often used in stock market graphs). Most of the figures here do not use moving averages—it’s sometimes interesting to see the ups and downs in individual years, and the large sample sizes of most of the surveys make the numbers reliable even within years. I have employed moving averages in two types of cases. First, the GSS, which is the only survey of adults and the only one asking some questions, samples only about 2,500 people per year (compared to about 15,000 for Monitoring the Future and about 200,000 for the American Freshman Survey). Since this book is about young people, that 2,500 is cut down further when you look at 18- to 29-year-olds or 18- to 24-year-olds. The sample sizes are still adequate, usually at least two hundred people per year, but the numbers fluctuate more because the samples are smaller. So some of the figures based on GSS data on young people use moving averages, which make the general trends easier to see. Second, I used moving averages for one MtF graph—the one on happiness, as that variable shows some ups and downs over time that get in the way of seeing the general trends. The figure with the unadjusted percentages looks similar, just messier.
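
  Here is a minimal sketch of the kind of smoothing involved, using a centered three-year moving average on made-up yearly percentages:

```python
import pandas as pd

# Hypothetical yearly percentages (made-up values for illustration).
yearly = pd.Series(
    [42, 45, 39, 44, 41, 38, 40, 35, 36, 33],
    index=range(2006, 2016),
)

# Centered three-year moving average: each year is replaced by the mean of
# itself and its two neighbors, smoothing out single-year fluctuations.
smoothed = yearly.rolling(window=3, center=True).mean()
print(smoothed)
```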

  For the questions on what teens think is important in life (mostly in chapter 6), most figures are adjusted for a curious generational tic: recent generations are more likely to rate everything as more important. Researchers who study values, such as Tim Kasser of Knox College, recommend subtracting the average of all of the values items from each. That adjusts for some people’s tendency to rate everything as important and others’ to rate few things as important. The adjusted numbers therefore capture how important someone thinks the value is relative to other values—thus the term relative centrality. Compared to the adjustments for weighting and moving averages, these corrections make more of a difference. Without them, for example, iGen high school seniors appear to value meaning and purpose in life more than Boomers did, but with the correction they value them less.
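
  A small sketch of that correction, using hypothetical value items and ratings; each respondent’s own average rating is subtracted from every item:

```python
import pandas as pd

# Hypothetical importance ratings (1-4) on several life-goal items; the item
# names are illustrative, not the actual survey wording.
ratings = pd.DataFrame({
    "money":   [4, 2, 4],
    "family":  [4, 4, 3],
    "meaning": [4, 3, 2],
    "fame":    [3, 1, 2],
})

# Relative centrality: subtract each respondent's own average rating from every
# item, so the score reflects how important a value is relative to that
# person's other values.
relative = ratings.sub(ratings.mean(axis=1), axis=0)
print(relative)
```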

  Last, I decided to present data on the relationship between variables (such as in chapter 3) using relative risk instead of correlations. As a psychologist, I usually use correlations, a statistic that tells you the relationship between two continuous variables. Correlations can be positive (between 0 and 1) or negative (between –1 and 0). For example, temperature and ice cream sales are positively correlated (as one goes up, so does the other); temperature and articles of clothing worn are negatively correlated (as one goes up, the other goes down). However, correlations aren’t particularly intuitive—what does it actually mean for people if two things are correlated .20?—and even within the field there’s a lot of debate about how big a correlation needs to be to “matter.”

  Relative risk, often used in medical journals, instead tells you the increased (or decreased) chance of one thing happening given another thing. So, for example, a study might find that the relative risk for a poor diet on heart disease is 1.30, meaning that people who have a poor diet are 30% more likely to get heart disease. A relative risk of 1 is even chances, meaning the exposure doesn’t make a difference in the outcome. But what do you do when something has a protective effect—say, if people who exercise are less likely to get heart disease? Usually, a protective effect is expressed as a relative risk lower than 1—for example, a relative risk of .70 for the effect of exercise on heart disease means that you’re 30% less likely to get heart disease if you exercise. It requires some math to see that at a glance, however, so I have modified relative risk in the tables so that even chances sit at 0 instead of 1. Continuing our example, that means the relative risk of a poor diet would appear as .30 and the relative risk of exercise as –.30. (This technique combines the features of relative risk and correlation.) That works very well for increases of less than 100%. It becomes a little more complex for relative risks over 2—on those charts, a doubling of risk will appear as 1 (corresponding to a 100% increase).
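
  As a sketch, here is that calculation with the poor-diet and exercise examples from this paragraph; the 13%, 10%, and 7% risks are invented simply to produce relative risks of 1.30 and .70.

```python
def relative_risk(risk_exposed: float, risk_unexposed: float) -> float:
    """Classic relative risk: ratio of the two groups' probabilities of the outcome."""
    return risk_exposed / risk_unexposed

def shifted_relative_risk(risk_exposed: float, risk_unexposed: float) -> float:
    """Relative risk re-centered so that 'no difference' is 0 rather than 1."""
    return relative_risk(risk_exposed, risk_unexposed) - 1

# Poor diet: 13% vs. 10% chance of heart disease -> RR 1.30, shown as .30 (30% more likely).
print(shifted_relative_risk(0.13, 0.10))   # ~0.30
# Exercise: 7% vs. 10% -> RR 0.70, shown as -.30 (30% less likely).
print(shifted_relative_risk(0.07, 0.10))   # ~-0.30
```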

  There are two downsides to relative risk compared to correlations. First, relative risk uses dichotomous variables (those with only two outcomes, which is why the results are expressed as percentages), so you lose some of the variation in responses that is better captured with correlations. Here that’s not too much of a problem, since the correlations and the relative risk percentages point in the same direction and are similar in size. Second, relative risk is difficult to use when outside factors need to be controlled for. For those analyses, I instead used odds ratios, which are not as intuitive, though they are related to relative risk. Those analyses are reported in the text but are not shown in the charts. However, I thought that relative risk was still the better way to convey the effects of activities on happiness and mental health—for example, is a heavy user of social media more or less likely to be unhappy than a lighter user?
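
  For the analyses with controls, one standard way to obtain odds ratios is logistic regression; the sketch below is illustrative only, and the variable names and random data are placeholders rather than the actual survey items or the models used for the book.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical teen-level data; every column name here is a placeholder.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "unhappy": rng.integers(0, 2, n),
    "heavy_social_media": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "college_bound": rng.integers(0, 2, n),
})

# Logistic regression of unhappiness on heavy social media use, controlling for
# the other variables; exponentiating the coefficients yields odds ratios.
result = smf.logit("unhappy ~ heavy_social_media + female + college_bound", data=df).fit()
print(np.exp(result.params))
```

  Exponentiating the logistic coefficients turns log-odds into odds ratios, which is why they are related to, but not the same as, relative risk.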

  Overall, these judgment calls on method made very small differences in the size of effects, and none of them changes the overall conclusions. If you’re interested in seeing more of the details, the papers on which these analyses are based are listed in the Notes.

  Appendix B

  * * *

  Chapter 1 Extra Stuff

  Going Out Without Parents

  The decline in going out without parents appears across all groups; here are the breakdowns by race and socioeconomic status.

  Figure B.1. Going out without parents, times per week, 12th graders, by race/ethnicity. Monitoring the Future, 1976–2015.

  Figure B.2. Going out without parents, times per week, 12th graders, by socioeconomic status (father’s education). Monitoring the Future, 1976–2015.

  Dating

  The decline in going out on dates was more pronounced for girls than for boys. Whereas girls once went out on dates more often than boys, now there is no difference by sex.

  Figure B.3. Going on dates, times per week, by sex, 12th graders. Monitoring the Future, 1976–2015.

  Driving

  The decline in 12th graders getting a driver’s license appears among teens living in rural, suburban, and urban areas. This suggests that it’s not due to ride-sharing services such as Uber or to differences in the availability of public transport.

  Figure B.4. Percentage of 12th graders with a driver’s license in rural, suburban, and urban areas. Monitoring the Future, 1976–2015.

  Working

  The number of hours a week teens spend at work has also been slashed. In the 1970s, 12th graders spent about fifteen hours a week in paid or unpaid work on average; in the 2010s, they spent about nine hours—six hours less (the question asks about paid and unpaid work for 12th graders and only paid work for everyone else). The peak of 10th graders’ work hours came in 1997, when they worked six hours a week on average; by 2015, the average 10th grader worked two and a half hours a week. These numbers include the now-large percentage who don’t work at all, but even among those who have a job, teens are working fewer hours. In the late 1970s, the average 12th grader with a job worked about nineteen hours a week; by 2015, that number had dropped to sixteen. Tenth graders with jobs in the late 1990s worked about thirteen hours a week, compared to ten and a half hours a week in 2015.

  Figure B.5. Hours per week spent working at a paid job (8th and 10th grades and college) and paid or unpaid job (12th graders). Monitoring the Future and the American Freshman Survey, 1976–2016.

  The decline in working at all was similar within socioeconomic statuses.

  Figure B.6. Percentage of 12th graders working for pay during the school year, by socioeconomic status (father’s education). Monitoring the Future, 1976–2015.

  Fewer teens worked during the summer as well.

  Figure B.7. Percentage of 16- to 19-year-olds employed in July. Bureau of Labor Statistics data analyzed by Challenger, Gray, and Christmas.

  What if teens are spending more time on school activities, and that’s why they don’t have time to work? That’s not the case, however. Time spent on extracurricular activities has not changed much; the only real increase is in volunteering, which increased by about eleven minutes a day over the entire time period and not at all since 2010. Participation in sports and exercise has dropped by seven minutes a day since 2012.

  Figure B.8. Hours per week spent on sports, student clubs, and volunteer work by entering college students reporting on their last year in high school. American Freshman Survey, 1987–2015.

  Teens in 2015 spent less time on homework than their counterparts in the early 1990s (twenty-one, six, and five fewer minutes a day for 8th, 10th, and 12th graders, respectively), with 12th graders heading to four-year colleges spending about the same amount of time. The pattern of change is curvilinear, however; homework time declined fairly steadily between the late 1980s and the mid-2000s and then increased again, though the increase did not make up for all of the loss. Between Millennial-era 2005 and iGen-era 2015, time spent doing homework decreased by eight minutes a day for 8th graders and increased by six, four, and thirteen minutes a day for 10th graders, 12th graders, and 12th graders headed for college, respectively. These changes are too small to account for the larger decreases in working for pay—and working for pay has declined steadily, unlike the curvilinear pattern shown here.

  Figure B.9. Minutes a day spent on homework or studying, 8th, 10th, and 12th graders (Monitoring the Future) and entering college students reporting on their last year in high school (American Freshman Survey), 1976–2016.

  Money

  Just as with 12th graders, fewer 10th graders worked and fewer were given an allowance. Whereas nearly all 10th graders once had control of their own money (95% in the early 1990s), by 2015 only 72% did. That means more than one out of four 10th graders did not have money of their own to manage and spend.

  Figure B.10. Money from jobs, allowances, or either, 10th graders. Monitoring the Future, 1976–2015.

  Alcohol

  It’s not just ever trying alcohol that has decreased among teens; across two surveys, teens are also less likely to have drunk alcohol in the last month.

  Figure B.11. Percentage of 8th, 10th, and 12th graders and college students and young adults who have drunk alcohol in the last 30 days. Monitoring the Future, 1993–2016.

  Figure B.12. Percentage of 9th to 12th graders who have drunk alcohol in the last 30 days. Youth Risk Behavior Surveillance Survey, 1991–2015.

  Individualism and Independence

  It might seem paradoxical that young people would grow up more slowly in individualistic cultures—don’t they encourage independence? They do, but individualism is also linked to larger cultural forces, including economic prosperity, small families, and technologically advanced economies, all of which encourage a slow life strategy. There is a strong correlation between how individualistic a country is (measured by cross-cultural psychologists) and how quickly young adults in those countries achieve maturity in family roles (such as marriage and children) and work roles (such as completing education and beginning full-time work). That’s what the next two graphs show (see Figures B.13 and B.14).

 
