Making Babies Scared of Bunnies: The Roots of Fear in Advertising

The following is the latest in a new series of articles on AlterNet called Fear in America that launched this March. Read the introduction to the series.

“Fear is as primal a factor as love in influencing personality,” wrote psychologists John B. Watson and Rosalie Rayner in 1920. “Fear does not gather its potency in any derived manner from love. It belongs to the original and inherited nature of man.”

The names of Watson and Rayner may have fallen into obscurity, but Professor Watson’s influence has been profound. He founded the school of psychology known as Behaviorism, which focused on observable behaviors instead of introspective conditions of the mind, and he argued that human behavior was susceptible to training, as with any other mammal. Watson and Rayner’s experiments with manipulating the fear responses of a baby boy remain one of the most infamous examples of unethical practices with human subjects.

Watson and his graduate student, Rayner, attempted to make a baby known as “Albert B” afraid of cute fuzzy animals. The experiment succeeded. Watson and Rayner started with the premise that fear was innate. The experiment explored three primary questions: 1) Could fear toward certain objects be taught? 2) Would the child learn to fear associated objects on his own? 3) Once fear had been established, would it remain? In other words, can we control the shape and direction of fear by manipulating the subject’s environment, thereby creating a series of phobic associations in the mind?

Watson would use a technique based on Nobel Prize-winner Ivan Pavlov’s famous experiments with dogs, which had been trained to salivate at the sound of a buzzer. Using these same principles of association, Watson and Rayner attempted to “condition” Albert to be afraid of white rats and rabbits by pairing the sight of them with a loud clanging noise.

The testing on little Albert started when he was nine months old. Why did his mother allow it? Nobody knows.
Watson described her as “a wet nurse at the Harriet Lane Home for Invalid Children.” However, there were no wet nurses at Harriet Lane, and Watson was known to be sloppy about those sorts of details. Speculating about the mother’s motivation leads nowhere, since we don’t know with certainty who she was. That sort of guessing also belongs to one of the schools of psychology that Watson opposed. Because Watson burned the notes associated with the experiment, the identities of baby and mother have long been an academic mystery.

What we do know, based on a scholarly paper Watson and Rayner published in 1920, is that Albert was likely chosen because he was an “unusually phlegmatic” baby, so calm and even-tempered that when he was suddenly confronted with a “white rat, a rabbit, a dog, a monkey, with masks with and without hair, cotton wool [and] burning newspapers,” he showed no signs of fear. By the time the experiment was finished, baby Albert “burst into tears” at the sight of them, even after 11 months had passed. He was never deconditioned, because finding out whether these artificially induced fears would last beyond infancy was part of the experiment.

All of this is interesting on its own, but what makes the story particularly fascinating is the twist: In 1920, Watson was fired from his prestigious academic post at Johns Hopkins University. Ostensibly, it wasn’t due to the now-glaring ethical lapses in the Albert B. experiment. As psychiatrist Jean Kim noted in an email to me, research institutions and hospitals now have review boards that “examine adherence to informed consent and ethics before approval,” but applying those standards to research conducted in the immediate aftermath of World War I would be anachronistic. Instead, the married professor was fired over his scandalous affair with Rayner.

His next job? Advertising. Starting at a low level, Watson quickly rose to become vice-president of J.
Walter Thompson, the real-life rival of the fictional advertising firm of Sterling Cooper on Mad Men. Once in place at this New York firm, still one of the biggest names in advertising, Watson could put his behaviorist theories into practice. “His doctrine,” wrote historian Peggy J. Kreshel, “which recognized prediction and control as the goal of psychology, meshed well not only with broader Progressive concerns of social control, but more particularly with the goals of the business community.”

Watson is broadly credited with putting “science” into advertising, but he is more specifically the father of “psychological advertising.” Watson concluded that logic failed to convince consumers to buy the advertised product because humans are irrational. Using scientific methods, he also determined that consumers couldn’t tell one product apart from another. So, he decided to “sell the image.” As far as he was concerned, the most effective way to do this was by inducing fear. Watson told advertisers to stir up the consumer’s emotions: “tell him something that will tie him up with fear, something that will stir up a mild rage, that will call out an affectionate or love response, or strike at a deep psychological or habit need.”

By dint of the same mechanism of association that made Albert fearful of white fluffy animals and objects (including Santa Claus), one of Watson’s most successful ad campaigns convinced women that Pebeco toothpaste made them sexy. On the face of it, neither association (white fur with “threat,” toothpaste with “allure”) stands to reason. Which is precisely the point.

But Watson’s work also showed that the cycle of fear can be broken. In a lesser-known study of a boy named Peter, he carried out a process now known as “desensitization.” A nervous child, Peter was Albert’s temperamental opposite. Peter started out afraid of white rats and rabbits, and lost his fear of them through repeated exposures paired with pleasurable rewards.
Because Peter’s fear of cute mammals stands apart from the sociocultural norm, it points to one of the lingering questions raised by Watson’s work: Are humans innately predisposed to like some animals and fear others, such as spiders? The answer seems to be yes and no. It’s a mix of atavistic instinct and modeled behavior: an adult screams, so the baby screams, and the baby learns to be afraid of spiders. Loud sound plus negative reaction equals fear. Hence, the way to fight irrational fear is through exposure coupled with the refusal to reward panic born of childish ignorance.

“Fear is borne on the power of doubt,” science historian Rob Boddice wrote in an email to me. “Objects of fear fill the spaces where knowledge and certainty are absent. Since knowledge is hard to come by and even harder to disseminate, fear spreads like a virus, attaching to hearsay and heresy, until knowledge finally, reassuringly penetrates. But just as uncertainties are temporary, knowledge is historical, subject to revision and revolution. Thus, the objects of fear change over time.”

One day, it’s white rats and rabbits. The next, it’s terrorists and Ebola. These are scary things, to be sure, yet if fear is an understandable response to vague and unfamiliar threats, to be ruled by that fear is to remain at the level of an animal. For Watson, those fearful, angry, lustful instincts were the reason Behaviorism worked in the first place. As far as he was concerned, humans were animals (and by that, no insult was intended), and that likeness required him to observe men with the same dispassionate eye he would turn on “an ox you slaughter”; man could just as easily be transformed into a docile and profitable body. In early-20th-century America, the idea that humans were as susceptible to behavioral conditioning as any other mammal was as controversial as Darwinian evolution, and for the same reason: the affront to human ego.
But in the new 1930 introduction to Behaviorism, Watson was able to point out that behaviorist theory had already profoundly influenced various spheres of thought in just a few short years, quietly disseminating into business and design as the objections dribbled away.

Almost a century has passed since Watson turned a hapless infant into a cringing wreck, fearful of bunnies, beards and Santa Claus. In the meantime, his psychological principles for achieving advertising success have arguably turned all denizens of consumer culture into baby Albert, manipulated by sounds and images into making associations that trigger three primal emotions: love, rage and fear. The applications have merely become more overt, with the designs of casinos and supermarkets carefully calibrated to create compulsions for items we don’t need.

It’s become a popular truism that American culture treats consumers like lab rats on a wheel, dangling sex, food, revenge and rewards just out of reach, to be forever chased. Turns out that feeling of running for your life while being stuck in one place is more accurate than anyone knew.
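For readers curious about the mechanics, the association-by-repetition process described above (Albert's conditioning and Peter's desensitization) is today often formalized with the Rescorla-Wagner learning rule, an error-correction model of Pavlovian conditioning. The sketch below is purely illustrative, not anything Watson computed; the learning rate and other parameter values are arbitrary assumptions chosen for demonstration.

```python
# Illustrative sketch of Pavlovian conditioning via the Rescorla-Wagner rule.
# V is the learned association between a stimulus (e.g., a white rat) and an
# outcome (a loud clang -> fear). All parameter values here are assumptions.

def condition(trials, v=0.0, alpha=0.3, lam=1.0):
    """Update associative strength V over repeated stimulus-outcome pairings.

    alpha: learning rate (salience of the stimulus)
    lam:   maximum association the outcome supports
           (lam=1.0 while the clang is present, lam=0.0 when it is absent)
    """
    history = []
    for _ in range(trials):
        v = v + alpha * (lam - v)   # error-correction update
        history.append(round(v, 3))
    return history

# Seven clang-paired exposures: the fear association climbs toward 1.0.
acquisition = condition(7)

# Desensitization (extinction): exposures without the clang (lam=0.0)
# drive the association back down, as with the boy Peter.
extinction = condition(7, v=acquisition[-1], lam=0.0)

print("acquisition:", acquisition)
print("extinction: ", extinction)
```

The same update rule produces both halves of the story: repeated pairings strengthen the fear response, and repeated safe exposures weaken it again.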

The Surprising Amount of Time Kids Spend Looking at Screens

Slouching posture, carpal tunnel syndrome, neck strain, eye problems: The negative effects that technology use has on human bodies are surprising, and sometimes deadly. Kids who spend much of their days, in and out of school, with their faces glued to digital screens may be establishing bad habits early. And according to a recent study by a group of Australian education and psychology experts, kids are spending more time with technology than researchers previously thought, far surpassing the American Academy of Pediatrics’ recommendation that screen time be limited to two hours per day.

The validity of the doctors’ guidelines is subject to question; even the study’s authors suggest adjusting the criteria to better align them with a world increasingly, and inevitably, inundated with technology. But pediatricians are closely monitoring the health risks associated with spending too much time looking at screens, and they’re not yet convinced they should ease up on the guidelines.

As part of the study, published Wednesday in the journal BMC Public Health, the team of researchers surveyed more than 2,000 Australian students ages eight through 16. The researchers gave the participants computerized assessments and asked them to estimate how much time they spent on screens; the intent was to understand how much time the kids were looking at all types of screens. The research marks the first time scientists have looked at students’ overall media use, according to the study. Earlier studies have focused specifically on how kids use just TV or only computers.

In fact, the study suggests that many students worldwide are probably using technology much more than the recommended two-hour daily maximum. But that doesn’t mean they’re all using it in the same way. The researchers found big differences in how long kids use screens depending on their age, their gender, and the activity type.
Close to half (46 percent) of all third-grade boys, on average, use screens for more than two hours per day, and that share rises to 70 percent of boys by the time they reach ninth grade. Slightly fewer third-grade girls (43 percent) exceed two hours compared with their male counterparts, but that rate jumps past the boys’ average by ninth grade, to more than 90 percent of girls. This discrepancy surprised the researchers because other studies have found that boys interact with screens more than girls overall. The reason behind this anomaly is difficult to deduce, but Victor Strasburger, a pediatrics professor at the University of New Mexico School of Medicine, speculates that “girls may have been doing more homework than the boys, and the boys may have been doing more sports [away from screens] or playing more video-games on hand-held devices.”

For general web use, which includes any research conducted for homework, students of both genders follow a similar pattern: Though fewer fifth graders than third graders use the Internet for more than two hours per day, the across-the-board usage rate steadily increases from fifth grade up to ninth. Almost half of ninth-grade girls, for example, are surfing the web (not necessarily using social media) for more than two hours every day.

Strasburger says there’s no reason to think the study would have had a different outcome had it been conducted in the U.S.; in fact, he suspects that it’s “probably worse” in America. And although some critics may point fingers at teachers who have integrated technology into the classroom, and might even condone excessive screen time, the Australian study shows that TV and movies, not in-school Internet use, account for the majority of kids’ screen time.
The American Academy of Pediatrics is perhaps the most influential organization to recommend that kids between ages three and 18 use screens for a maximum of two hours daily; kids younger than three, the academy says, should avoid screens altogether. But that’s not because researchers think digital media is inherently harmful. “If used appropriately, it’s wonderful,” Marjorie Hogan, a pediatrician at Hennepin County Medical Center in Minneapolis, told NPR. “We don’t want to demonize media, because it’s going to be a part of everybody’s lives increasingly, and we have to teach children how to make good choices around it, how to limit it and how to make sure it’s not going to take the place of all the other good stuff out there.”

But the pediatricians who help make the academy’s recommendations have to take into consideration more than just the education and career benefits. A number of studies have correlated extended screen time with various negative health effects. According to the pediatric academy’s website, “Studies have shown that excessive media use can lead to attention problems, school difficulties, sleep and eating disorders, and obesity. In addition, the Internet and cell phones can provide platforms for illicit and risky behaviors.” More recently, researchers have discovered additional concerning effects, including changes in social behavior and even a condition now called “text neck.”

The Australian study gives rare, fleeting insight into how kids are using technology. Though that’s helpful to pediatricians and parents, some experts say it’s not yet enough information to merit a modification to the American Academy of Pediatrics’ formal recommendations. The problem is that it takes years, sometimes decades, to parse out the long-term health effects of different technologies, much less craft recommendations for how to use them.
“The media are so different and ubiquitous, it’s like a mission impossible to get a good handle on who is using what when for how long,” Strasburger said. Strasburger helped develop the guidelines that were published in 2001, recommendations based on research conducted two years earlier. With no smartphones on the market and computer use less widespread than it is today, Strasburger and his colleagues really only considered TV, games, and movies as “screen time.” But, given the health studies available, they still adopted the same two-hour-maximum recommendation. Moreover, funding is notoriously tricky to secure for studies about long-term media use, and technology is changing faster than ever, so it’s almost impossible to come up with recommendations that are actually applicable to kids when they need them.

Ultimately, the Australian study authors conclude, complying with the rule is still worth a try. Though they’re often ignored, recommendations like the ones from the American Academy of Pediatrics can help parents encourage their kids to use technology responsibly, Strasburger said. “During the week when kids are in school, if they’re spending something like five hours in front of the TV, not outside playing or doing homework or interacting with you [the parent], that’s a problem.”