Popular wisdom drives home the importance of a head start and specializing early. Not so fast, advises journalist David Epstein.
After Epstein wrote about the famed 10,000-hour rule in his first book, The Sports Gene, he was invited to debate Malcolm Gladwell, whose book Outliers had brought the rule into the mainstream. To prepare for the debate, Epstein gathered studies that looked at the development of elite athletes and saw that the trend was not early specialization. Rather, in almost every sport there was a “sampling period” where athletes learned about their own abilities and interests. The athletes who delayed specialization were often better than their specialized peers, who plateaued at lower levels.
Epstein filed this information away until he was asked by the Pat Tillman Foundation to give a talk to a group of military veterans. These were people who were changing careers and having doubts that are familiar to many: whether they were doomed to always be behind because they hadn’t stuck to one thing. Epstein became interested in exploring the benefits of being a specialist versus a generalist, and ended up writing Range: Why Generalists Triumph in a Specialized World (Penguin Random House).
The Verge spoke to Epstein about “kind” versus “wicked” environments, the importance of doing instead of planning, and the difference between having range and being a dilettante.
This interview has been lightly edited for clarity.
When did this cult of specialization develop? You mention the 10,000-hour rule, of course, but it was definitely around before then.
The 10,000-hour rule just gave it more of a common language and intensified it, basically. I really do think that Tiger Woods himself [whom Epstein writes about in the beginning of the book] helped kick this into high gear when he went on television as a two-year-old. It started this explosion of parents thinking, I have to get my kid doing this. There are a number of prodigies like this. It’s easy to conceptualize giving someone a head start. We give a lot of lip service to “try and fail,” but we don’t actually encourage it when the rubber meets the road.
I also think that sharing videos and movies of prodigies, whether that’s Searching for Bobby Fischer or the Polgár story or the Tiger story — and those things are always in classical music, chess, or golf because those domains are so amenable to that sort of thing — kicked off our natural tendencies to want to give kids a head start. And the 10,000-hour rule codified it and extrapolated it to every other domain, where it doesn’t necessarily belong.
Right. One of the key ideas of your book is that early specialization and lots of deliberate practice does work in certain “kind” environments, but it’s not as useful for succeeding in “wicked” environments where it might be better to be a generalist. Tell me more about this distinction.
Obviously, Tiger Woods went on to be the best golfer in the world. Nothing I wrote was meant to say this didn’t work, but the problem was extrapolating that to everything else that people want to do. Golf is what psychologist Robin Hogarth called a “kind” environment; Hogarth set out a spectrum running from kind to wicked environments.
The kindest environment is one where all the information is totally available, you don’t have to search for it, patterns repeat, the possible situations are constrained so you’ll see the same sort of situations over and over, feedback on everything you do is both immediate and 100 percent accurate, and there’s no human behavior involved other than your own. On the opposite side, with wicked environments, not all information is available when you have to make a decision. Typically you’re dealing with dynamic situations that involve other people and judgments, feedback is not automatic, and when you do have feedback it may be partial and it may be inaccurate.
Most of the things that people want to do are much further toward the wicked end of the spectrum than golf. You don’t know any of the rules, and they’re subject to change without notice, over and over. Early specialization is not the best way to go.
You talk about the importance of a “sampling period.” But how long should this optimal sampling period last? Someone could theoretically just sample forever, but that doesn’t make sense either.
“We learn who we are in practice, not in theory.”
That’s the million-dollar question. Or billion-dollar question. In sports, it looks like athletes who want to be the best start cutting out other things in their midteen years, around 15. But it’s not clear to me whether that’s the optimal way to do it or whether it’s because they are forced to specialize then. We saw in a study of German soccer players, some of whom went on to play in the World Cup, that they were still doing other sports informally past the age of 22. With Cirque du Soleil, they started having their performers learn the basics of other performers’ disciplines — not because they were going to perform them, but because it helped them be more creative in designing new shows. That cut injury rates by a third. But I don’t think we know exactly what would be optimal.
In addition to discussing the advantages of being a generalist, the book also touches on the disadvantages of specialization and how that can blind us. Do you have a favorite example of this that you like to talk about?
Part of what got me interested in this is that I realized I had committed statistical malpractice in my own master’s degree. I can’t say I “love” talking about it; it’s a little embarrassing. And that research is still published.
The problem is, when I went to grad school, in geological sciences, I was rushed into this very specialized research before I’d even been taught how the tools of thinking and science work. So I was studying very specific information, not knowing what was happening when I was pressing the computer buttons and getting statistically significant results, and publishing them and getting a master’s degree for that.
There’s a replication crisis in science and a huge amount of it is exactly what I was doing: people not thinking about how their statistics work. You can get big enough data now and there are powerful statistical programs, so you don’t have to know how scientific thinking works. I think that’s a huge problem.
At one point in the book, you relay the advice “first act, and then think,” which runs counter to much of what we’re told about the importance of planning. Can you unpack that a bit?
That’s from Herminia Ibarra, an organizational behavior specialist who studies career transitions. She gave me one of my favorite phrases that’s related to the “act and think” one. It’s “We learn who we are in practice, not in theory.” There’s a lot of research that shows we can take personality quizzes and everything, but our insight into ourselves is restricted. It’s similar to the end-of-history illusion: we recognize that we’ve changed a lot in the past but think we won’t change much in the future, and we’re wrong at every single stage.
This industry attempts to just give you a quiz, or commencement advice tells you to sit down and introspect and think about what you want to do, and that’s really contradictory to what we know about how people develop and how personality develops over time. We have to actually do stuff and reflect on it. That’s how we learn about our skills, interests, and possibilities in the world, as opposed to having a theory of ourselves and assuming. Try stuff and take time to reflect. The best learners have the trait of reflecting on things they’ve done, because they’re learning about who they are.
Another thing that is interesting is the idea of having intellectual range and taking in lots of information. How does that dovetail with all the research on cognitive biases that keep us from believing information that contradicts our beliefs? How do we have intellectual range?
It’s really difficult. Algorithms reinforce that, and unless you stop yourself and realize what’s being done to you, you don’t think about it. That’s partly what chapter 10 is about. If you look at people who develop good judgment about the world, the people who were really specialized and had a narrow focus actually got worse as they accumulated information, because they were better able to fit any story to whatever their views were. One of the main traits of people who had better judgment and were able to avoid falling into their own cognitive biases all the time was a trait called “science curiosity.”
“How do you capture the benefits of range even at the point in your career where you have specialized to a certain degree?”
There were clever studies where people were given statistics to analyze, and sometimes it’s just some bland clinical trial of skin cream and other times it’s something very political, like whether gun control reduces deaths. Numerate people often become innumerate when they’re faced with those numbers in that context. It’s not an issue of being bright: they could interpret the numbers well when it wasn’t political. The people who bucked the trend were the ones who were highest in science curiosity. Not science knowledge, but science curiosity, measured by whether, when they were faced with information that didn’t agree with their preconceived notions, they would follow up and research broadly or put it aside and ignore it.
So I think we have to very proactively try to step outside the algorithm and do the opposite of what our inclination is. We should see if we can falsify our notions. That was a hallmark of what the people with the best judgment do. It ends up with them widely gathering a large array of sources to try and test their own ideas. You can get so much positive feedback for not doing that as long as you stick to your little corner of the universe.
For those of us who stick in the same field, is it possible to be a generalist and a specialist?
At the end I focused on scientists and scientific research. Scientists, to the outside world, are the epitome of specialization in one sense, and I wanted to make sure that I included people who were viewed that way. Among these people who, compared to the population at large, are very specialized, what does it mean to have range? How do you capture the benefits of range even at the point in your career where you have specialized to a certain degree?
So I looked at people like Andre Geim [who has won both the Nobel Prize and the Ig Nobel Prize, which is given to “trivial” research]. I called the guy who started the Ig Nobel, and he said they tell people ahead of time so they can decide if they’d rather not have it and turn it down. But I think Geim was proud of it, basically. He talks about how “it’s psychologically unsettling to change what I do every five years, but that’s how I make my most important discoveries.” He says he doesn’t do research, only “search.” I enjoyed that because we all specialize to one degree or another, and the question is how we capture the benefits.
I loved your book and have recommended it to several friends because it speaks to worries that so many people have, like “Am I just lazy because I can’t stick to one thing?” To an extent, the book helps assuage some of those fears, but I couldn’t help wondering: when do you have range and when are you a dilettante?
I don’t think it’s great to give advice like, “Don’t worry about being interested or hard-working at anything.” I like to think of the study of inventors at 3M. They identified generalist and specialist inventors, but there was also a class of inventors who didn’t have that much breadth and who didn’t have that much depth. They didn’t tend to make contributions. They were the dilettantes. They flitted between things to some extent, but didn’t learn about as many different technology classes as the generalists. But they also tended not to go deep in any particular technology, so they ended up without an intellectual home, but also without the ability to connect disparate domains in a novel way.
I think that’s symbolic of the difference. You have to give a real effort and be ravenously curious about things, because part of the hunt for what economists call “match quality” is diving into things in a way that gives you maximum signal about yourself. And if you only skim the surface, I don’t think you’re getting the signal that helps you find where you are in the world.
Voice is what makes artificial intelligence come to life, says writer James Vlahos. It’s an “imagination-stirring” aspect of technology, one that has been part of stories and science fiction for a long time. And now, Vlahos argues, it’s poised to change everything.
Vlahos is the author of Talk to Me: How Voice Computing Will Transform the Way We Live, Work, and Think (Houghton Mifflin Harcourt). It’s already the case that home assistants can talk and show personality — and as this technology develops, it’ll bring a host of questions that we haven’t reckoned with before.
The Verge spoke to Vlahos about the science of voice computing, which people will benefit most, and what this means for the power of Big Tech.
This interview has been lightly edited for clarity.
What exactly is happening when you talk to a gadget like Alexa and it talks back?
If you’re just used to talking to Siri or Alexa and you say something and hear something back, it feels like one process is taking place. But you should really think about it as multiple things, each of which is complex to pull off.
First, the sound waves of your voice have to be converted into words, so that’s automatic speech recognition, or ASR. Those words then have to be interpreted by the computer to figure out the meaning, and that’s NLU, or natural language understanding. If the meaning has been understood in some way, then the computer has to figure out something to say back, so that’s NLG, or natural language generation. Once this response has been formulated, there’s speech synthesis, so that’s taking words inside a computer and converting them back into sound.
Each of these things is very difficult. It’s not as simple as the computer looking up a word in a dictionary and figuring things out. The computer has to get some things about how the world and people work to be able to respond.
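Purely as an illustration of how those four stages hand off to one another, they can be wired together as a toy pipeline. Every function below is a stand-in invented for this sketch — real ASR, NLU, NLG, and speech synthesis are each large machine-learned systems, not lookups like these:

```python
def asr(audio: bytes) -> str:
    """Automatic speech recognition: sound waves -> words.
    Stubbed: we pretend the 'audio' bytes are already text."""
    return audio.decode("utf-8")

def nlu(text: str) -> dict:
    """Natural language understanding: words -> a meaning (intent).
    Stubbed with a trivial keyword match."""
    if "weather" in text.lower():
        return {"intent": "get_weather"}
    return {"intent": "unknown"}

def nlg(meaning: dict) -> str:
    """Natural language generation: intent -> response words.
    Stubbed with canned responses."""
    responses = {
        "get_weather": "It looks sunny today.",
        "unknown": "Sorry, I didn't catch that.",
    }
    return responses[meaning["intent"]]

def synthesize(text: str) -> bytes:
    """Speech synthesis: words -> sound (stubbed as raw bytes)."""
    return text.encode("utf-8")

def assistant(audio: bytes) -> bytes:
    """The full round trip a voice query makes: ASR -> NLU -> NLG -> synthesis."""
    return synthesize(nlg(nlu(asr(audio))))
```

The point of the sketch is only the shape of the hand-offs: each stage consumes the previous stage's output, and a failure anywhere in the chain surfaces as a bad spoken answer.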
Are there any really exciting advances in this area that piqued your curiosity?
There’s a lot of really interesting work being done in natural language generation, where neural networks are crafting original things for the computer to say. They’re not just grabbing prescripted words; they’re generating responses after being trained on huge volumes of human speech — movie subtitles and Reddit threads and such. They’re learning the style of how people communicate and the types of things person B might say after person A. So the computer is being creative to a degree, and that got my attention.
What’s the ultimate goal of this? What will it look like when voice computing is ubiquitous?
The big opportunity is for the computers and phones that we’re using now to really fade in their primacy and importance in our technological lives, and for computers to sort of disappear. When you need information or want to get something done, you just speak and computers do your bidding.
That’s a huge shift. We’ve always been toolmakers and tool users. There are always things we hold or grab or touch or swipe. So when you imagine that all just fading away and your computing power is effectively invisible because we’re speaking to tiny embedded microphones in the environment that are connected to the cloud — that’s a profound shift.
A second big one is that we are starting to have relationships with computers. People like their phones, but they don’t treat them as people, per se. We’re in the era where we start to treat computers as beings. They exhibit emotions to a degree and they have personalities. They have dislikes, and we look to them for companionship. These are new types of things you don’t expect from your toaster oven or microwave or smartphone.
Who might benefit the most from the rise of voice assistants? The elderly is one group that we often hear about — especially because they can have poor eyesight and find it easier to talk. Who else?
The elderly and kids are really the guinea pigs for voice computing and personified AI. Elderly people have the issue often of being alone a lot, so they are the ones that might be more likely to turn to chitchat with Alexa. There are also applications out there where voice AI is used almost as a babysitter, giving medication reminders or letting family members do remote check-ins.
Though, not to overgeneralize, some older people have dementia, and it’s a little bit harder for them to recognize that the computer is not actually alive. Similarly, kids’ grasp of reality is not so firm, so they are arguably more willing to engage with these personified AIs as if they were really alive in some way. You also see voice AIs being used as virtual babysitters, like, I’m not at home but the AI can watch out. That’s not totally happening yet, but it seems to be close to happening in some ways.
What will happen when we get virtual babysitters and such and all the technology fades into the background?
The dark scenario is that we seek out human companionship less because we can turn to our digital friends instead. There’s already data pouring into Amazon that people are turning to Alexa for company and chat and small talk.
But you can spin that in a positive way and I sometimes do. It’s a good thing that we’re making machines more human-like. Like it or not, we spend a lot of time in front of our computer. If that interaction becomes more natural and less about pointing and clicking and swiping, then we’re moving in the direction of being more authentic and human, versus us having to make ourselves like quasi-machines as we interact with devices.
And I think we’re going to hand more centralized authority to Big Tech. Especially when it comes to something like internet search, we are less likely to browse around, find the information we want, synthesize it, open magazines, open books, whatever it is we do to get information versus just asking questions of our voice AI oracles. It’s really convenient to be able to do that, but also we give even greater trust and authority to a company like Google to tell us what is true.
How different is that scenario from the current worry about “fake news” and misinformation?
With voice assistants, it’s not practical or desirable for them to, when you ask them a question, give you the verbal equivalent of 10 blue links. So Google has to choose which answer to give you. Right there, they’re getting enormous gatekeeper power to select what information is presented, and history has shown that if you consolidate the control of information very highly in a single entity’s hands, that’s rarely good for democracy.
Right now, the conversation is very centered on fake news. With voice assistants, we’re going to skew in a different direction. Google’s going to have to really focus on not presenting [fake news]. If you’re only presenting one answer, it better not be junk. I think the conversation is going to more turn toward censorship. Why do they get to choose what is deemed to be fact?
How much should we worry about privacy and the types of analyses that can be done with voice?
I am as worried about the privacy implications as I am about smartphones in general. If tech companies are abusing that access to my home, they can do it just as easily with my computer as with Alexa sitting across the room.
That’s not at all to play down privacy concerns. I think they’re very, very real. I think it’s unfair to single out voice devices as being worse. Though there is the sense that we’re using them in different settings, in the kitchen and living room.
Switching topics a little bit, your book spends some time discussing the personalities of various voice assistants. How important is it to companies that their products have personality?
Personality is important. That’s definitely key, otherwise why do voice at all? If you want pure efficiency, you might be better off with a phone or desktop. What hasn’t happened heavily yet is differentiation around the edges between Cortana, Alexa, Siri. We’re not seeing tech companies design vastly different personalities with an idea toward capturing different slices of the market. They’re not doing what cable television or Netflix do where you have all these different shows that are slicing and dicing the consumer landscape.
“The big opportunity is for the computers and phones that we’re using now to really fade in their primacy and importance in our technological lives and for computers to sort of disappear.”
My prediction is that we will do that in the future. Right now, Google and Amazon and Apple just want to be liked by the largest number of people, so they’re going pretty broad, but [I think they will develop] the technology so my assistant is not the same as your assistant is not the same as your co-worker’s assistant. I think they’ll do that because it would be appealing. With every other product in our lives we don’t have one-size-fits-all, so I don’t see why we would with voice assistants.
There’s some trickiness there, though, as we see in discussions around why assistants tend to have female voices. Is more of that in store?
We’re seeing questions already about issues relating to gender. There’s been very little conversation about the issue of race or perceived race of virtual assistants, but I have a sense that that conversation is coming. It’s funny. When you press the big tech companies on this issue, except for Amazon who admits Alexa is female, everyone else is like “it’s an AI, it doesn’t have a gender.” That’s not going to stop people from perceiving clues about what sort of gender or race identity it’s going to have.
All this to say, Big Tech is going to have to be really careful to navigate those waters. They might want to specialize a little more, but they risk doing something that sounds like cultural appropriation, or something that is just off, or stereotypical.
“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.
So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?
Gray ended up collaborating with fellow MSR senior researcher Siddharth Suri to write Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Houghton Mifflin Harcourt).
The Verge spoke to Gray about her research findings and what they mean for the future of employment.
This interview has been lightly edited for clarity.
Labeling data to feed to algorithms is one obvious example of ghost work. Content moderation is another. What are other examples?
Filling out surveys, captioning and translation work, any sort of transcription service. Doing web research, verifying location addresses, beta testing, user testing for user designs. Anything you can think of as knowledge work, like content creation, writing editorial, doing design. You name it. The list is endless. All of those are tasks that can be distributed online. It’s all of the things we’re used to seeing in the office, and this is what it looks like to dismantle that as a full-time job and turn it into projects for myriad people.
Am I right in thinking that basically every tech company relies or has relied on ghost work?
I would be hard-pressed to find any business that sells itself as AI that didn’t deeply rely on ghost work to generate its basic product or isn’t very much reliant on it today. There are so many startups and businesses out there, anything that calls itself “business insights” or “intelligence and analytics.” That’s using crowdsourcing or collective intelligence, and that’s relying on ghost work. There is no way around the need for people to sift through the piles of what’s called unstructured data.
Sometimes, people think that as technology gets better, we won’t need this type of ghost work anymore. But you write that “the great paradox of AI is that the desire to eliminate human work generates new tasks for humans.” So clearly you don’t subscribe to that belief. Why not?
What might change are the specific tasks. Believing that AI will never need humans labeling data means believing that language will never change, style will never change. Service industries, especially, are so difficult to fully automate because being able to listen to somebody’s voice and register that person’s silent anger is such a human capacity. So there are cases when AI will, I argue, always fall short.
Engineers are always wonderfully optimistic about opportunities. As an anthropologist, I know how complicated it is to think cross-culturally about these questions. Even if we fairly reliably get to 100 percent of spoken English with a flat Midwestern accent, what about when you move into vernacular and slang and folks who will splice together languages and code switch? Anytime you see an autotranslation of a talk, you see the places where language breaks down, often around somebody’s name.
“I hate the parallel to horsepower. This is not like replacing horses with automobiles.”
Those are the kinds of computational problems that are intractably hard for AI to capture because there’s not enough data consistently available to model what’s going to be the next utterance that somebody is saying using Spanglish. We’ve already effectively automated all of the easy things.
One interesting thing you mention is that we don’t have good labor statistics for how many people are doing ghost work. Why is that?
The biggest challenge is that the ways we count jobs are often in relationship to professional identities, or really clearly defined capacities or skills, and no one is oriented to a world of work that is project-based. We don’t have the language to describe an image tagger or a captionist. One of the findings in our research is that people have really different mental models. They may or may not identify as self-employed. They may or may not identify as a journalist if they write for a content farm, and that might change whether they decide to answer a survey question to help us measure this workforce. Let alone the fact that ghost work is distributed around the globe, and there is no global bureau of labor statistics.
A key question in this book is: who are the people doing ghost work? So, who are they? It sounds like they could be almost anyone.
When we got our initial set of surveys back from the four different platforms we studied, it was clear that there were as many women as men, though they worked different hours. People had college educations, but that wasn’t surprising because that maps on to knowledge work and information services broadly.
They are all of us. These are the folks who, for reasons of social capital, don’t have access to a network that was going to boost them into the full-time job. That’s the pattern I see sociologically or anthropologically. They’re first-generation college-going more often than not. This is a group of people who don’t have strong social ties to elites.
What are people’s motivations for this work?
There’s not one type of person doing this work and not one single motivation. There is a core group of people who are turning to this work, often because of other constraints on their time. People would say that they don’t have time to commute, that they were going to be commuting at least two hours for a comparably paid job, and that was going to cut into the amount of money they could make. That’s the calculus they’re making here. So they’re deciding to turn to this work, and effectively, once they’ve figured out how to make enough money on enough platforms, they cobble together the equivalent of a full-time salary that meets their needs. We call those folks “always on,” and they’re turning this into full-time work through the number of income streams. But this group of people is a small percentage — 10 to 15 percent, depending on the platform. This is what the research tells us about all these platforms: the core group of people is doing the bulk of the work.
Then there are the “regulars,” a deep bench of people who can step in at any time. The regulars are the ones who enable the “always on” people, because if the “always on” step out, there are enough people on that bench who are going to be able to step in. They’re often caregivers, and they had other motivations; they were pursuing another passion project or they were going back to education and taking courses, and this gave them a means to finance that.
Lastly, there is the long tail of experimentalists, which is the name we gave the people who try one or two projects, figure out that this is not for them, and leave. The most important part of doing anthropological work is we could meet the people who left and figure out why. And it had to do with never hooking into a community of peers to help lower their costs, feeling like they don’t have enough support, and that this was too difficult to figure out. And it was exhausting cognitively.
A feature of this kind of market is that anyone can work for anyone else. What happens in that kind of environment?
For anybody who becomes a regular or “always on,” they’re invested and bring the same framework they have to any job. It’s an amazing amount of self-policing because workers are invested in making sure that work comes back to the pool. They want to make sure their peers are doing well because, if not, that could work against their interests in getting the next job.
“Stop making our lives wrap around work, and start making work serve our lives.”
Businesses should be equally invested in this accountability in the supply chain. If they’re relying on this workforce to lower their costs, and what they need most is somebody who’s ready and willing and able to jump in for a project, the exchange is to create some mechanism that ensures anybody who is entering is refreshed and has the opportunity to keep up. Otherwise, it’s not sustainable as a labor market.
But companies aren’t doing that. They’re not creating the accountability or trust or culture that would help the ghost workers.
If you talk to any of these companies, most of them believe that we’re going to get this automated, and they think, “I just need these people for a little while.” That’s precisely our problem, and it’s historically been our problem since the Industrial Age: mistreating the people who do the contingent work that can’t quite be automated. We stop paying attention to these people and their work conditions, we start treating them as something that can be replaced eventually, and we don’t value the fact that they’re doing something that a mechanical process or computational process can’t do.
I hate the parallel to horsepower. This is not like replacing horses with automobiles. People are not performing a mechanical task. They’re extending something distinct about humans — their creativity and their interpretation.
What should we do to address this? What are the policy suggestions?
At the very least, it means valuing everybody’s contribution. The first step is being able to identify the people who have contributed. In Bangladesh, it made a huge difference in textiles when companies selling products had to tell us who was involved in making the shirt on my back. There should be a clear record thanking anybody who contributes labor to an output or service. The consumer should always be able to trace back the supply chain of people who have had a hand in helping them achieve their goals.
This is about regulating a form of employment that does not fit fully in full-time employment or in part-time employment, or even clearly in self-employment. I believe that this is the moment to say the classification of employment no longer functions. Anybody who's of working age should have a baseline of provisions that are supplied by companies.
If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.
I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.
There is no lack of parenting advice available, and the sea of tips from well-meaning relatives and internet strangers alike can leave parents looking to scientific evidence for guidance.
The problem is that not all evidence is equal. There are plenty of studies out there, but a tiny study funded by a formula company, for example, should not be taken as seriously as a large randomized trial conducted by a government agency. “It is not super straightforward to evaluate all of this literature,” says Emily Oster, a health economist at Brown University and the author of Cribsheet: A Data-Driven Guide to Better, More Relaxed Parenting from Birth to Preschool, out now from Penguin Press. (Oster’s previous book, Expecting Better, was a data-driven guide to pregnancy.) To understand what the data really suggests when it comes to sleep training, toddler discipline, or language development, “it goes beyond just reading the papers,” Oster says. It requires evaluating which papers are good, which types of evidence are strong, and drawing conclusions — which are all skills that Oster uses as an academic economist.
The Verge spoke to Oster about the gaps in parenting literature, how to evaluate research, and the best advice she’s ever received.
This interview has been lightly edited for clarity.
How big is the academic literature on birth and child-raising relative to other areas of health? My sense has long been that topics related to women’s health are understudied. Was that the case for literature on parenting?
I think it’s hard to tell. There certainly is a lot of literature on pregnancy and moms and kids, and there’s also a lot of literature on a lot of other things in science. What I will certainly say is that there are ways in which the literature on this is frustrating, and there are some very important questions about which we don’t seem to know as much as I would hope.
One place that clearly comes up is screen time. A lot of people would like to understand the impact of screen time on little kids, particularly on the kind of questions that modern parents ask, like “Is it okay for my kid to play iPad games?” That’s something where we simply don’t have the kind of evidence we would need, partly because people differ in their choices about what to let their kids do on screen, but also because the modern version of screens is very different from the version even 15 years ago. So even if you wanted to ask, “What is the impact of this kind of exposure on kids who are graduating from high school now?” we aren’t yet ready to answer those questions. It’s an inevitable limitation, but it’s frustrating for those of us who are trying to make these choices.
On the other hand, what are some areas in which scientific literature has a lot of strong, good evidence? Vaccination seems to be one. What are some others?
Vaccination is one area. The literature on allergies and food is very new, but it is very good. There aren't a million studies, but the stuff that we have is very compelling and high-quality. A third place I would highlight is the sleep training stuff. Relative to a lot of these other hot-button first-year topics, the evidence there is more amenable to randomized controlled trials, which are the gold standard of evidence in studies.
Can you tell me more about the types of data that are considered “high-quality” versus the studies that might not be as strong?
The baseline approach that most of the literature takes is to compare outcomes. Let’s take breastfeeding as a good example. You can compare IQ for kids who are breastfed and kids who are not breastfed and try to draw some conclusions. The issue is that, in a lot of settings, the kind of people who are breastfeeding are different from the kind of people who are not, and those differences may well be in part, or entirely, what is responsible for the difference in kids’ outcomes, not the breastfeeding itself. So when we look at these kinds of studies, we want to look for places where we avoid that problem as much as possible.
One way to do that is to try to really carefully control for differences across families. That's pretty hard to do. In some cases, though, you see siblings who were treated differently, and that tends to be better because they'll, for example, have the same mom, so you'll at least hold the mom constant. The other thing we can do is randomized trials, where instead of just comparing kids who are treated differently based on the choices parents make, you encourage families to make one choice or another and later compare the kids. Because you're choosing randomly whom you encourage, the differences are more plausibly associated with the treatment and not with differences across families.
Since you mentioned breastfeeding, which is a famously hot-button topic, can you tell me a little about what your analysis of the data showed us about its benefits?
There’s a lot of emphasis on all of the many benefits of breastfeeding, and these fall into multiple categories: benefits to the mom, benefits to the baby in the short-term, benefits to the baby in the long-term. A lot of the evidence is biased or problematic because, as we said, the choice to breastfeed is not random. So we have to look at better kinds of evidence. There’s one very large randomized trial and a bunch of studies that compare siblings and do a better job adjusting, and when you look at those pieces of evidence, what comes out is that there are some real benefits for the baby early on — there may be some improvements in digestion, lower allergy risk, a reduction in ear infections — and some long-term benefits to the mom in terms of breast cancer reduction. But some of those claims about later health benefits for kids — like higher IQ, lower obesity, less asthma — don’t seem to be borne out in the best data. So the picture ends up a bit more mixed. It says breastfeeding is great, and we should help people do that if they want to, but we may want to dial back some of the more grandiose claims.
One thing I liked about the book is how you not only go over the data, but you also talk about economic concepts, like how to think marginally in the case of choosing child care and comparing cost. Can you unpack that a bit?
Thinking marginally is a core component of economics. Optimizing — doing the best thing — requires thinking about the opportunity cost and things on the margin. So in this example, when you’re trying to decide what the right thing is to do with child care and evaluating options that cost different amounts, it’s easy to think just about money as money. Just dollars.
But I would argue that, in fact, you need to conceptualize what you will buy with the money, and what is the value of that? And that’s where thinking about the marginal, or next-best use of the money is really important. Sometimes confronting the next-best use of the money can cause you to make a different choice. You realize that the next-best use would be a new car, but I don’t care that much about having a new car this year, and I actually would rather have that child care choice.
I also liked how you added nuance to some stories that we’ve heard, like how marital satisfaction declines after having children. That does seem to be true, but it doesn’t decline identically for all couples, right?
So it’s true that, on average, marital happiness declines after you have kids, but those declines are smaller if the kid is planned and if you’re happier beforehand. It’s not uniform that everybody hates their spouse after they have kids, although I think it is good to recognize that the first year of your kid’s life is likely to be a time when most marriages do feel some stress. That is something you may have to work through, and a lot of people, of course, do come out on the other side.
You write that, as kids get older, it gets more and more difficult to find studies because the data gets way more complicated. How so?
In general, even very good studies are typically only able to figure out an average effect. We’re able to say, on average, did the kid sleep better after we did this kind of sleep training? For little kids, that’s often a well-posed question, but as you get to older kids, there’s so much difference across kids in how they’re going to react to things.
A question a lot of people ask me is something like, is a Montessori school a good idea? On average, we don't have that much data, and it could easily be the case that a Montessori school is really good for some kids and really not good for other kids. Even a very good study of Montessori could mask some really good effects for some kids and really bad effects for others, and our data is even less well-suited to pick up that kind of heterogeneity. That problem only gets worse as kids get older and parents are choosing things that they think will work for their kid. There's less data, and the reason is partly that it's so hard to imagine doing these studies well.
The book ends with you talking about the most helpful parenting advice you’ve received. You were taking your daughter to France and were really worried about her getting stung by a bee and being allergic and something bad happening. And your doctor was like, “I’d probably just try not to think about that.” I liked that a lot. Why do you consider that to be the best advice after diving into so much data and papers?
There’s a sense in which, for me, that is very important to hear. I think about that advice all the time because it’s pretty broadly applicable to a lot of things in parenting. We can get caught up in every tiny decision and miss the enjoyment of parenting and the part of this that’s supposed to be fun. It just pushed against some of my worse instincts as a parent to just obsess over everything. Sometimes you just have to accept that you cannot control everything. That’s hard, but it’s part of the fun. Also, the kid was eventually stung by a bee, and it was totally fine.
Rates of obesity worldwide have nearly tripled since 1975, and the prevailing belief is that city living is to blame. But a major study that covers 112 million adults suggests that weight gain in rural areas is responsible for much of this increase.
Members of the NCD Risk Factor Collaboration — an international group of health scientists — analyzed over 2,000 studies of how body mass index (BMI) has changed around the world from 1985 to 2017. (BMI, a person's weight divided by the square of their height, is a popular measure of obesity, though not without its flaws.) The results, published today in the journal Nature, show that during this time period, more than 55 percent of the rise in BMI globally came from rural populations — specifically rural populations in emerging economies, which include many places in Latin America, Asia, and the Middle East. “In the world as a whole, BMI has been going up faster in rural areas than in urban areas,” study co-author Majid Ezzati, an expert in global environmental health at Imperial College London, said in a press briefing.
This directly contradicts the belief that people living in rural areas are less likely to gain weight because they eat healthier, unprocessed foods and do more physical labor. That may have been the case, continued Ezzati, but as rural areas industrialize, life starts to change. People don’t need to walk to fetch water because they have running water. They don’t need to walk to other places because roads are being built and cars are more common. These changes bring a lot of health benefits, added Ezzati, “but they also mean less moving around and less physical labor.”
As rural areas become wealthier, people living there can afford more food and, often, less healthy food. This means they’re eating the same processed foods as their urban counterparts, without the other benefits of city living that make it easier to be physically active. Notably, there are more sports facilities and gyms in urban areas, and far more opportunities to walk. Everything might be further away in the countryside, but that leads to people driving from one place to another. Rural areas also have higher rates of preventable deaths, due to limited access to health care.
The data from today’s study confirms that, in wealthy and industrialized countries, people living in rural areas have long had higher BMI and higher rates of obesity than those living in cities. (This finding holds in the US, according to the Centers for Disease Control and Prevention.) It’s just that we’re starting to see this trend develop in lower-income countries as well. Many global health efforts focus on malnutrition, but perhaps it’s time to shift the focus to getting high-quality calories and moving more.
Sherry Pagoto, a professor of health sciences at the University of Connecticut, says certain disparities can make it harder for rural populations to access this type of education or care. Rural and lower-income areas are less likely to have internet access, for example, which makes it harder to communicate educational information. She would like to see obesity initiatives that are specifically tailored for either rural or urban populations. Groups like the Robert Wood Johnson Foundation are trying to solve problems in rural areas by integrating health education with local institutions, like in churches or community centers, she says. “We have to think outside the box a little,” she says. “How do you leverage what’s there in order to solve that problem more quickly?”
By scanning the brains of adults who played Pokémon as kids, researchers learned that these people have a brain region that responds more to the cartoon characters than to other pictures. More importantly, this charming research method has given us new insight into how the brain organizes visual information.
For the study, published today in the journal Nature Human Behaviour, researchers recruited 11 adults who were “experienced” Pokémon players — meaning they began playing between the ages of five and eight, continued for a while, and then played again as adults — and 11 novices. First, they tested all of the participants on the names of pokémon to make sure the pros actually could tell a Clefairy from a Chansey. Next, they scanned the participants’ brains while showing them images of all 150 original pokémon (in rounds of eight) alongside other images, like animals, faces, cars, words, corridors, and other cartoons. In experienced players, a specific region responded more to the pokémon than to these other images. For novices, this region — which is called the occipitotemporal sulcus and often processes animal images — didn’t show a preference for pokémon.
It’s not that surprising that playing many hours of Pokémon as a kid would lead to brain changes; looking at almost anything for long enough will do the same thing. We already know that the brain has cell clusters that respond to certain images, and there’s even one for recognizing Jennifer Aniston. The bigger mystery is how the brain learns to recognize different images. What predicts which part of the brain will respond? Does the brain categorize images (and therefore develop these regions) based on how animated or still they are? Is it based on how round or linear an object is?
The usual way to investigate this is to teach children (whose brains are still developing) to recognize a new visual stimulus and then see which brain region reacts. Study co-author Jesse Gomez, a postdoctoral fellow in psychology at the University of California at Berkeley, was inspired by this type of research on monkeys. But “it seems a little bit unethical to have a kid come in and trap them for eight hours a day and have them learn a new visual stimulus,” Gomez says. Teaching a new visual stimulus is a carefully controlled process. To make sure that you get clean data, you need to show all subjects the same picture with the same brightness and viewed from the same distance, and you need to show it over and over again.
The project seemed like a pipe dream until Gomez realized that pokémon, specifically the kind from the Game Boy games from the 1990s, would be perfect for this task. “I spent almost as much time playing that game as I did reading and stuff, at least for a couple of years when I was six and seven,” he says. For this generation, everyone saw the same images (black-and-white pokémon that didn’t move), and most people held the Game Boy about a foot away from their face, making this an ideal experiment.
The results support a theory called “eccentricity bias,” which suggests that the size of the images we’re looking at and whether we’re looking at them with central or peripheral vision will predict which area of the brain will respond. This particular region is associated with people looking directly at an image. Since nobody spent hours as a kid playing Pokémon on their Game Boy using just their peripheral vision, the theory checked out.
This isn’t the first time Gomez has studied the brain using pokémon. He’s also done scans of kids looking at pokémon, and he says that similar methods could be used when it comes to sound. When pokémon appear, they make a certain sound, and Gomez thinks it might be interesting to see whether there’s a “pokémon region” in the auditory part of the brain, too.
People with depression listen to sad music because it makes them feel better, according to a small study that is one of the first to investigate why people turn to tearjerkers when they’re already down.
The first part of the study, published recently in the journal Emotion, tried to repeat the findings of a 2015 study that showed that depressed people preferred listening to sad music. Researchers at the University of South Florida asked 76 female undergrads (half of whom were diagnosed with depression) to listen to various classical music clips. “Happy” music included Jacques Offenbach’s cheerful “Infernal Gallop,” and “sad” music included Samuel Barber’s “Adagio for Strings,” which is almost universally considered to be extremely depressing. The scientists found that, as in the 2015 study, participants with depression indicated they would rather listen to sad music than happy music.
Then, the researchers gave the participants new clips of happy and sad instrumental music and asked them to describe how the tracks made them feel. Again, the depressed participants preferred the sad music, but they also stated that the sad music made them feel happier. “They actually were feeling better after listening to this sad music than they were before,” study co-author Jonathan Rottenberg told WUSF News. It seemed to have relaxing and calming effects. This challenges the assumption that sad people listen to sad music to make themselves feel worse, when, in fact, it may be a coping mechanism.
Of course, there are many limitations. This is a small study that only looked at female undergraduates, so the results should be taken with a grain of salt. (Psychology, in general, tends to use WEIRD — Western, educated, industrialized, rich, democratic — subjects too often.) We don’t have a lot of detail regarding exactly why people with depression prefer sad music, and we don’t know how results might change with happy and sad music that has words.
However, it’s an intriguing finding that does replicate some earlier research and could have implications for fields such as music therapy. In this intervention, trained music therapists incorporate music into their interactions with patients by singing, listening to music, or playing music together. It has been used for everything from pain relief to helping cancer patients, and a 2017 Cochrane review of the evidence suggested that it had at least short-term benefits for patients with depression. Though there is no “most common” type of music used in music therapy, the programs can often include instruments like guitars and drums. In the future, maybe there will be a greater focus on sorrowful songs.
Despite the growing alarm about the harmful effects of sitting too much, Americans are sitting more than in the past — in part because people are spending more leisure time in front of computers.
The association between lots of sitting and bad health is now well-established, but there hasn’t been a lot of data on how sedentary Americans actually are, says Yin Cao, a cancer epidemiologist at Washington University in St. Louis and co-author of a study published today in the Journal of the American Medical Association. By analyzing data from the National Health and Nutrition Examination Survey, Cao and her team found that, for both adults and teens, total time spent sitting increased by about an hour per day from 2007 to 2016 (from about 6.4 hours to 8.2 hours a day).
The amount of time spent sitting and watching television or videos was generally stable between 2001 and 2016 — at least two hours a day — but as time went on, people in all age groups reported spending more of their leisure time sitting while using the computer. By 2016, half of adults reported doing this for at least an hour a day, up from 29 percent in 2003. This was true for 57 percent of teens, up from 53 percent. “[People] work indoors more than ever before and this may also change their leisure time activity as well,” explains Cao. Though using computers is now a common leisure activity, that may not have been the case in the past when computers were less ubiquitous. Cao adds that people may not be aware that sitting too much can be dangerous, especially since “there’s no clear intervention” to address this issue in most schools and workplaces.
Standing desks have become more popular, partly in response to the fear of sedentary behavior, but this quick tech fix might not be the answer. First, there’s little scientific proof that they improve people’s health. As The New York Times puts it, “standing is not exercise,” and it’s a better idea to take a short walk than to just stand instead of sit. Plus, though exercise is pretty much unequivocally good, it’s important that the discussion around sitting and health outcomes is nuanced. It’s possible that sitting is a symptom of other issues that cause bad health, not necessarily a key cause of bad health itself. For example, people who are unemployed may sit a lot, but the challenges associated with unemployment (including financial worries, stress, or depression) may present bigger issues than sitting.
So what’s the takeaway? Don’t rush to spend thousands on a fancy new standing desk, but if you can, take a minute to stand up and go on a nice walk outside.
“I think it’s significant that I haven’t left the Bay Area,” says writer and artist Jenny Odell, who grew up in Cupertino and now lives in Oakland. Odell, who also teaches at Stanford University, is the author of How to Do Nothing: Resisting the Attention Economy (out now from Melville House).
Many writers have sounded the alarm over our increasingly fractured attention, but Odell’s book is not about blocking the internet or retreating from Facebook. Odell does focus on the importance of reclaiming attention and focus, but she also tells people what to do when they’re not staring at Facebook: go out into the natural world instead. Learn the names of the plants, the history of the region, the songs of the birds. When you can distinguish sounds and petals and regions, you will never be able to see the same way again.
The Verge spoke to Odell about the importance of place, the role of technology in grounding us, and different types of stillness.
This interview has been lightly edited for clarity.
Your book was interesting to me not just because you posit “place” as the answer to the attention economy, but because of the particular place you talk about. I also grew up in the Bay Area, near Cupertino, but I left just about as soon as I could. What made you stay?
I place a lot of value on community and knowing people for a long time. When you’re in a group of people that is making art or writing, it’s really nice to have that community where you and your work are well-known and supported.
That’s interesting because, in the book, you talk about why it’s important to know the place you’re from and be grounded, in a sense, instead of floating in this contextless world of the internet. How did those ideas — place versus internet — start to intersect for you?
Some of it is a natural consequence of learning about the place that I have been for my entire life and realizing how little I actually knew and being really fascinated and kind of taken by that. This is going to sound really nerdy, but I took some urban studies classes in college and read some stuff about suburbia and the type of urbanism that it turns out there is in Cupertino. And then I went home, and I was like, “Oh my god, I have a vocabulary for this now!” When I was growing up, the phenomenon of McMansions coming in and taking over was just kind of a given, something my family and I would talk about but didn’t really think about. I came back many years later and was able to say, “This is the result of such-and-such zoning,” and I started to see it as not given and part of a larger context.
“I just have this fixation with the idea of attending to what is already there”
That’s sort of a general thing. Specifically, after 2016, I had that experience of sitting in the rose garden and thinking about why that was valuable. Part of the answer was, “Oh, I’m here. I’m in a place. I can pay attention to it.” It has a grounding quality.
You seem to argue that the physical is more “real” than the online. In past years, I feel like there was an emphasis on how “real” online connections are, like “online friendships are real, too!” How do we reconcile these strands of thought?
Well, I don’t actually disagree with that argument. One of the things I talk about at the end of the book is how social media has the potential to be hugely useful — you’re connecting people who are in the same place, it’s very fast, you can share knowledge very easily — but in my mind, that could be especially useful when mixed together with things like in-person meetings or more intentional communications. Even a group chat to me is closer to an in-person meeting. So there are ways where the digital and the physical are working together.
I like to collect examples of places and situations where there’s not even an overlap but a reverberation between physical experience and digital representation. In one of my classes, we talk about places where you see the digital and the physical interacting in places where it’s very inextricable. My example is this mountain — Mission Peak in Fremont — and at one point, it became very popular to take a photo on top of this pole that was on top of the mountain. And now everybody needs to have that photo, so all of these people started clambering on this mountain and going off designated trails. And in my mind, the mountain is being eroded by Instagram.
You write about how you began using the app iNaturalist to help you start identifying flora and fauna and get more grounded in place. At one point, a student asks you whether that’s “taking you away” from the experience, and you say it isn’t. How do we evaluate when technology brings us closer to the world versus further away?
As an artist and someone who’s writing about attention, I’m partial to uses of technology that give you more information about what you’re looking at. So one of my examples is just a pair of binoculars. A pair of binoculars is a form of technology, and you can say there’s something “unnatural” about the view through a pair of binoculars because it’s augmenting human vision. But if I go out without my binoculars, I can’t see as much, and I can’t be as curious as I would be otherwise. It gives me a new experience of a place and more to look at, literally. I tend to really value things like that. iNaturalist is an easy example; constellation-identifying apps are another.
One of the most interesting parts of the book to me is when you critique people who say that attention is a structural problem, and it’s the responsibility of tech companies to divert it in an ethical way. That’s a view we frequently hear, but you say that it takes away our agency and ability to decide what we want to look at since it’s still the companies funneling our attention. What else is missing from this argument that companies can help us reclaim our attention?
I think one really important aspect that comes up in any form of the attention economy, whether it’s disempowering or empowering, is the assumption that attention is like currency. Most currency is standardized. We don’t barter anymore, so it relies on the idea of standardization and uniformity and consistency. And in my experience, attention isn’t like that. You have forms of shallow attention, you have really deep attention, you pay different kinds of attention to different things in different situations.
Differentiation and proliferation of attention are things you can learn, which is one of the reasons I talk so much about art in that chapter. That dimension of attention and human perception is missing from that formulation for me.
I think many people will like your book because it’s not draconian, and it doesn’t try to make us delete our accounts forever and run away to the woods. Can you tell me more about why you advocate a more moderate approach?
The very first reason is that it’s impossible. Maybe there are some mountain hermits that will prove me wrong, but for most of us, something like that is not even feasible. It’s really interesting and important to register that impulse, which is almost commonplace now, but on the level of actual feasibility, it’s not something that you’re probably going to do. But on top of that, if you’re buying the book, you’re ultimately concerned about doing something, and a lot of the anxiety that gets exploited by the attention economy comes from a very real feeling of living in a difficult time and wanting to do something about it or feel useful. I think that’s something that would eventually catch up with most people. Maybe I’m just extrapolating from myself, but I assume that someone who is concerned enough about what is happening with their attention to buy the book is ultimately wanting to say or do something meaningful at the end of the day.
A book called How to Do Nothing is obviously going to talk about the virtues of being still. But as you mentioned, there are different types of stillness. As I was reading, I couldn’t stop thinking about how the choreographer George Balanchine distinguished between two kinds of stillness: that of a cat sitting there and that of a cat sitting there ready to pounce. What kind of stillness are you advocating for?
I really love that. I think it’s probably both, or maybe alternating between the two. Something I have thought about more since writing the book is this idea of moving between different states of attention in your mind. It feels like a movement or a kind of shift and, ideally, you could do that with volition and intention rather than being jerked around and always staying in that shallow state of attention.
So I would say it’s important to know when to rest and also important to know when to be in this second state of stillness where you’re shrewdly observing the situation from the outside. Those are definitely not the same thing and they’re both really necessary. Without some sort of actual rest, maybe the other one isn’t possible. You wouldn’t be as sharp.
The last question I have is about maintenance. You argue that we care too much about the new and disruptive instead of maintaining what is there. What would a focus on maintenance instead of “disruption” look like?
I’ve been really inspired by local groups here in Oakland that do things like steward a local creek or literal habitat restoration stuff. Groups of people who come together and feel responsible for something that is living in the place where they live, and just observing the amount of work that goes into that. Of course, there are examples that are not obviously environmental that someone could find an entry point with. I just have this fixation with the idea of attending to what is already there, so the first step is looking around and seeing what is already there and what needs support before jumping off into “I need to make XYZ.”
This year, April 20th feels different. Marijuana enthusiasts have long celebrated 4/20, but now that businesses are seeing profit potential, they are getting involved, too, with things like $4.20 Lyft credits and CBD-infused Carl’s Jr. hamburgers. Marijuana is a big business, even if it’s not legal everywhere yet. So what’s the current state of marijuana legalization?
In a nutshell, things look promising. From a public opinion standpoint, marijuana legalization has become very popular. As Vox’s German Lopez has noted, three major national polls highlight just how quickly support is growing. All three show that more than 60 percent of Americans support legalization. According to Gallup, for example, 66 percent of Americans supported legalization in 2018, compared to about 60 percent just two years earlier.
The trend holds when it comes to state-level legislation. The 2016 election was a tipping point for marijuana legalization, and three states voted in favor during the 2018 midterm elections, including Michigan, the first state in the Midwest to do so. In total, 10 states and Washington, DC have legalized marijuana, and others have decriminalized the drug. States like Ohio and Arizona are likely to have ballot initiatives pushing for legalization soon.
“The political wind is certainly in favor now of marijuana legalization,” says Steve Hawkins, executive director of the Marijuana Policy Project. “If we have 10 states right now that have legalized marijuana, we could easily see 10 more states by the end of 2020.”
Hawkins’ group believes that once marijuana is legal in 25 states, it’ll “create that tipping point” for Congress to end federal prohibition. “It would be a mistake to think that Congress will act because of federal lobbying efforts,” Hawkins says. “Lawmakers have to hear from their constituents. Change will come when there’s a chorus of state voices exerting pressure on Congress to act.”
That said, federal reform of marijuana laws has become a key talking point for the 2020 presidential Democratic candidates. Sens. Bernie Sanders (I-VT), Cory Booker (D-NJ), Kamala Harris (D-CA), Kirsten Gillibrand (D-NY), and Elizabeth Warren (D-MA) have all spoken out in favor of loosening marijuana laws.
“If we have 10 states right now that have legalized marijuana, we could easily see 10 more states by the end of 2020.”
Justin Strekal, political director of the National Organization for the Reform of Marijuana Laws (NORML), points out that the previous Congress introduced 63 bills related to cannabis prohibition. “That number is larger than every previous session of Congress combined,” he says. The bills have addressed everything from ending the policy of criminalization to providing resources to expunge criminal records to narrower legislation that would allow the Department of Veterans Affairs to conduct research trials.
Some of the legislation under debate now includes the Ending Federal Marijuana Prohibition Act, which would officially remove marijuana from the Controlled Substances Act and let states regulate it. (The Drug Enforcement Administration currently lists marijuana as a Schedule I drug, meaning it is considered more dangerous than cocaine and meth.) There is also the SAFE Banking Act, which would allow financial institutions to work with cannabis companies without fear of punishment by the federal government, and the RESPECT Resolution, which encourages states to adopt certain best practices to address racial disparities in the marijuana industry and help repair the effects of the war on drugs. Even though black and white Americans use marijuana at similar rates, black people are nearly four times more likely to be arrested for marijuana possession, according to one 2013 report. This general pattern holds even after legalization.
For Strekal, there are plenty of questions that remain about how legalization might play out. For example, will federal policy reform end the criminalization of marijuana, or will it restrict federal enforcement of the Controlled Substances Act? Will a federal comprehensive bill include aspects of restorative justice for those affected by criminalization? Would it include resources for those in the underground economy to work in the legal marijuana marketplace? “We’ve now moved the conversation from an ‘if’ to a ‘how,’” says Strekal. “And the ‘how’ matters.”