I remember reading at some point in the literature that humans are unable to avoid distraction from a certain primary task if this task isn't using up close to all available 'resources' - basically meaning that every time the human brain has cognitive resources left over, they'll be allocated to one thing or another.
Is that right? If not, at what point is my understanding wrong? Can you give a relevant reference concerning this question?
Edit: I'm referring to distraction in the context of traffic from the perspective of the individual driver.
Hancock, P. A., Mouloua, M., & Senders, J. W. (2009). On the philosophical foundations of the distracted driver and driving distraction.
I think you probably need to define more clearly what you mean by distraction - what sorts of events or objects in the world distract, and what does it mean to be distracted? - but between-subjects differences in susceptibility to distraction have at least been noted in the literature on field dependency: http://www.amsciepub.com/doi/abs/10.2466/pms.1918.104.22.1685
5 Ways To Grab Your Customer’s Attention in a Distracted World
By Team Braze, Jan 13, 2016
Marketers in today’s always-connected, information-rich world face an enormous challenge: A consumer’s brain can hold only so much information before it becomes fatigued. A person’s attention span is unavoidably scarce, and with the explosion of information available in our hands every day through personal devices, mobile marketers are fighting big odds to reach and engage their audiences.
By definition, a limited resource has value, making it a currency. This economic concept, called attention economics, treats human attention as a scarce commodity because a person only has so much of it. Moreover, the more information that is available, the more expensive attention becomes.
With a passion for the economics of attention, Harvard Business School Associate Professor Thales Teixeira has conducted research and published findings on how to leverage this limited resource. He promotes scientific rigor to allow for a dependable, repeatable process that helps marketers engage more effectively with their audiences. His studies have centered largely on on-air and online video ads, but his findings are relevant to capturing attention with omnichannel and retention marketing strategies and on consumers’ mobile phones, where attention is even more fragmented.
Teixeira has calculated that the cost of attention has jumped seven- to nine-fold in real terms since 1990, making it the most dramatic business-expense increase of the last 25 years. The solution? Teixeira urges marketers to consider how they can capture attention in a cost-effective way.
If Multitasking Is Impossible, Why Are Some People So Good at It?
Everybody multitasks. We have conversations while driving. We answer email while browsing the Web. It's hard to imagine living any other way. What would be the alternative, removing the seats from your car to ensure you only drive alone? Block every website not named Gmail? A world of constant single-tasking is too absurd to contemplate.
But science suggests that multitasking as we know it is a myth. "Humans don't really multitask," said Eyal Ophir, the primary researcher with the Stanford Multitasking study. "We task-switch. We just switch very quickly between tasks, and it feels like we're multitasking."
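The rapid switching Ophir describes can be illustrated with a toy model. This sketch is my own illustration, not part of the study: it treats "multitasking" as a sequence of single-task work slices, charging a fixed time penalty at every switch between tasks. The `switch_cost` value is an assumption chosen for illustration, not a measured figure.

```python
# Toy model of "multitasking" as rapid task-switching.
# Every switch between tasks incurs a fixed time cost
# (the 0.3 s default is an illustrative assumption).

def total_time(task_slices, switch_cost=0.3):
    """Sum the duration of each work slice, adding `switch_cost`
    whenever consecutive slices belong to different tasks."""
    total, prev_task = 0.0, None
    for task, seconds in task_slices:
        if prev_task is not None and task != prev_task:
            total += switch_cost  # penalty for changing tasks
        total += seconds
        prev_task = task
    return total

# The same 4 minutes of work, done in blocks vs. interleaved:
blocked     = [("email", 120), ("report", 120)]   # 1 switch
interleaved = [("email", 30), ("report", 30)] * 4  # 7 switches

print(round(total_time(blocked), 1))      # 240.3
print(round(total_time(interleaved), 1))  # 242.1
```

The point of the sketch is only qualitative: the interleaved schedule does the same work but pays the switch penalty many more times, which is why rapid-fire single-tasking feels like multitasking while quietly costing time.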
In 1946, the world was introduced to history's first general-purpose electronic computer: ENIAC, nicknamed the "Giant Brain." At the time, the word multi-tasking did not exist. It first appeared in a magazine called Datamation in 1966, according to the Oxford English Dictionary, in the following sentence: "Multi-tasking is defined as the use of a single CPU for the simultaneous processing of two or more jobs."
In the 65 years since, computers have become multitasking wizards, with the ability to download movies while playing music while running complex programs and executing a million other functions we take for granted, yet that in 1946 would have seemed like magic. Meanwhile, the people operating these wondrous machines have not gotten any better at multitasking. If anything, we have gotten worse.
In The Shallows, a book about memory and the Internet, Nicholas Carr said the Web was changing the way we think, read and remember. Humans are hunters and hoarders of information. We seek, we find, we remember. If the Internet is helping us seek and find data, it is hurting our ability to absorb and retain it. Before the Internet, the theory goes, our attentions expanded vertically. With the Internet, our focus extends horizontally, and shallowly.
Why do we think we're so good at something that doesn't exist? We compensate for our inability to multitask with a remarkable ability to single-task in rapid succession. Our brains aren't a volley of a thousand arrows descending on an opposing army. Our brains are Robin Hood. One man with one bow firing on all comers, one at a time.
If multitasking is a myth, it might come as a surprise that some people are good at it. It turns out that people who multitask -- or rapid-fire-single-task -- less are better at firing the next arrow of attention at a new task. A famous media multitasking study found that "heavy" multitaskers are more susceptible to distractions and therefore worse at task-switching effectively. This makes sense if you consider multitasking to be "the art of paying attention." Heavy multitaskers roll out the welcome mat for every new distraction. Of course they can't pay attention to things. Attention isn't their intent.
Attention is important. And light multitaskers might be better at preserving their attention. But some people value distraction. They knowingly seek the thrill of the new. In an interview with Boing Boing, Ophir made the essential point that it's hard to determine what kind of workers are most "effective" at multitasking until you determine what they want from their work.
"I think heavy multitaskers are not less effective -- they simply have a different goal," he said. "Where you might say traditionally we value the ability to focus through distractions, they are willing to sacrifice focus in order to make sure they don't miss an unexpected, but rewarding, surprise. As a result, they might do worse in the office scenario I described, but they might also be the first to slam on the brakes in the car/mobile phone scenario."
The upshot is that it's pointless to say that one type of worker is good at multitasking, and another is bad. Instead, there is a limited supply of this thing called attention, and a million ways to divide, manage, and preserve it. For some people, a state of deep focus is office nirvana. For others, perpetual distraction is an office necessity. You fire your arrows the way you want.
A New Theory of Distraction
“At painful times, when composition is impossible and reading is not enough, grammars and dictionaries are excellent for distraction,” the poet Elizabeth Barrett Browning wrote, in 1839. Those were the days. Browning is still right, of course: ask any reader of Wikipedia or Urban Dictionary. She sounds anachronistic only because no modern person needs advice about how to be distracted. Like typing, Googling, and driving, distraction is now a universal competency. We’re all experts.
Still, for all our expertise, distraction retains an aura of mystery. It’s hard to define: it can be internal or external, habitual or surprising, annoying or pleasurable. It’s shaped by power: where a boss sees a distracted employee, an employee sees a controlling boss. Often, it can be useful: my dentist, who used to be a ski instructor, reports that novice skiers learn better if their teachers, by talking, distract them from the fact that they are sliding down a mountain. (He’s an expert distractor in his current job, too: the last time he cleaned my teeth, he hummed all of “You Make Loving Fun,” including the guitar solo.) There are, in short, varieties of distracted experience. It’s hard to generalize about such a changeable phenomenon.
Another source of confusion is distraction’s apparent growth. There are two big theories about why it’s on the rise. The first is material: it holds that our urbanized, high-tech society is designed to distract us. In 1903, the German sociologist Georg Simmel argued, in an influential essay called “The Metropolis and Mental Life,” that in the tech-saturated city “stimulations, interests, and the taking up of time and attention” turn life into “a stream which scarcely requires any individual efforts for its ongoing.” (In the countryside, you have to entertain yourself.) One way to understand the distraction boom, therefore, is in terms of the spread of city life: not only has the world grown more urban, but digital devices let us bring citylike experiences with us wherever we go.
The second big theory is spiritual—it’s that we’re distracted because our souls are troubled. The comedian Louis C.K. may be the most famous contemporary exponent of this way of thinking. A few years ago, on “Late Night” with Conan O’Brien, he argued that people are addicted to their phones because “they don’t want to be alone for a second because it’s so hard.” (David Foster Wallace also saw distraction this way.) The spiritual theory is even older than the material one: in 1874, Nietzsche wrote that “haste is universal because everyone is in flight from himself”; in the seventeenth century, Pascal said that “all men’s miseries derive from not being able to sit in a quiet room alone.” In many ways, of the two, the material theory is more reassuring. If the rise of distraction is caused by technology, then technology might reverse it, while if the spiritual theory is true then distraction is here to stay. It’s not a competition, though; in fact, these two problems could be reinforcing each other. Stimulation could lead to ennui, and vice versa.
A version of that mutual-reinforcement theory is more or less what Matthew Crawford proposes in his new book, “The World Beyond Your Head: Becoming an Individual in an Age of Distraction” (Farrar, Straus & Giroux). Crawford is a philosopher whose last book, “Shop Class as Soulcraft,” proposed that working with your hands could be an antidote to the sense of uselessness that haunts many knowledge workers. (Kelefa Sanneh reviewed it for this magazine, in 2007.) Crawford argues that our increased distractibility is the result of technological changes that, in turn, have their roots in our civilization’s spiritual commitments. Ever since the Enlightenment, he writes, Western societies have been obsessed with autonomy, and in the past few hundred years we have put autonomy at the center of our lives, economically, politically, and technologically; often, when we think about what it means to be happy, we think of freedom from our circumstances. Unfortunately, we’ve taken things too far: we’re now addicted to liberation, and we regard any situation—a movie, a conversation, a one-block walk down a city street—as a kind of prison. Distraction is a way of asserting control; it’s autonomy run amok. Technologies of escape, like the smartphone, tap into our habits of secession.
The way we talk about distraction has always been a little self-serving—we say, in the passive voice, that we’re “distracted by” the Internet or our cats, and this makes us seem like the victims of our own decisions. But Crawford shows that this way of talking mischaracterizes the whole phenomenon. It’s not just that we choose our own distractions; it’s that the pleasure we get from being distracted is the pleasure of taking action and being free. There’s a glee that comes from making choices, a contentment that settles after we’ve asserted our autonomy. When you write an essay in Microsoft Word while watching, in another window, an episode of “American Ninja Warrior”—trust me, you can do this—you’re declaring your independence from the drudgery of work. When you’re waiting to cross the street and reach to check your e-mail, you’re pushing back against the indignity of being made to wait. Distraction is appealing precisely because it’s active and rebellious.
Needless to say, not all distractions are self-generated; the world is becoming ever more saturated with ads. And this, Crawford thinks, has turned distraction into a contest between corporate power and individual will. In the airport, for example, we listen to music through headphones to avoid listening to CNN. There’s a sense, he argues, in which personal-technology companies are in an arms race with advertising and marketing firms. If you go to the movies and turn off your phone prematurely, you may be stuck watching the pre-preview ads—but, if you have an Apple Watch, you can still assert your autonomy by scrolling through lists and checking your step count. Fundamentally, of course, the two sides are indistinguishable: they both speak in what Crawford calls “autonomy talk,” “the consumerist language of preference satisfaction,” in which consumer choice is identified with liberation and happiness. “Choice serves as the central totem of consumer capitalism, and those who present choices to us appear as handmaidens to our own freedom,” he writes.
We are now cocooned, Crawford argues, within centuries’ worth of technology designed to insure our autonomy—the smartphone just represents the innermost layer. If you check Twitter from your tablet computer while watching “Game of Thrones” on demand, or listen to Spotify while working on a spreadsheet in your cubicle, then you’re taking advantage of many technologies of autonomy at once. A central irony of modern life, Crawford writes, is that even within our cocoons the “cultural imperative of being autonomous” is as strong as ever. That imperative depends on the “identification of freedom with choice, where choice is understood as a pure flashing forth of the unconditioned will” (a click, a scroll, a tap). Despite the revolutionary rhetoric of technology companies, we’re less like revolutionaries than like gamblers in a casino. A gambler experiences winning and losing; he takes risks and makes fateful choices. But he does all this inside a “highly engineered environment,” and his experiences are mere simulacra of what they would be outside of it. Just as ironic winning—winning that is, in the long run, losing—is at the center of the gambler’s life, so ironic freedom—action that is actually distraction—has become a “style of existence” for the modern person.
Given the extremity of his vision, you half expect Crawford to propose a radical solution: Burn it all down! Dismantle the Matrix! But his suggestions turn out to be humbler. “The image of human excellence I would like to offer as a counterweight to freedom thus understood,” he writes, “is that of a powerful, independent mind working at full song.” “Working” is the key word. Much of “The World Beyond Your Head” is about people who do work to which they can’t help but pay attention: short-order cooks, hockey players, motorcycle racers, glassblowers. These workers, Crawford writes, endeavor to bring themselves “into a relation of fit” with a demanding world. When a line cook rushes to keep up with new orders, or when a motorcyclist anticipates a patch of slick road, they are simultaneously “limited and energized” by the constraints they encounter. (There’s little solace in the book for committed office workers; Crawford himself has forgone a traditional academic career to run a business that manufactures custom motorcycle parts.) The point is that these workers, who are immersed in what they do, are not really autonomous; instead, they are keyed into the real world (the demanding kitchen, the unpredictable road). They aren’t living in their heads, but sensing the grip of the tires on the asphalt, the heat of the flames at the grill. “Joy is the feeling of one’s powers increasing,” he writes. Distraction is the opposite of joy, which becomes rarer as we spend more time in a frictionless environment of easy and trivial digital choices.
“The World Beyond Your Head” is insightful and, in parts, convincing. Its problem, ironically, is one of focus. Crawford ends up seeing pretty much all of modern life as a source of distraction. Conversely, he appears satisfied only while developing a narrow range of manly skills. And he overstates the power of what is, for the most part, a merely annoying aspect of contemporary life. He’s not alone in this: many writers on distraction present it as an existential cataclysm. A previous and influential book on the subject, by the journalist Maggie Jackson, was called “Distracted: The Erosion of Attention and the Coming Dark Age.”
Why do so many writers find distraction so scary? The obvious answer is that they’re writers. For them, more than for other people, distraction really is a clear and present danger. Before writing “Shop Class as Soulcraft,” Crawford earned a Ph.D. in philosophy at the University of Chicago. Distraction is even scarier for graduate students; a few years spent working on a dissertation leaves you primed to fear and loathe it out of all proportion.
More generally, distraction is scary for another, complementary reason: the tremendous value that we’ve come to place on attending. The modern world valorizes few things more than attention. It demands that we pay attention at school and at work; it punishes parents for being inattentive; it urges us to be mindful about money, food, and fitness; it celebrates people who command others’ attention. As individuals, we derive a great deal of meaning from the products of sustained attention and concentration—from the projects we’ve completed, the relationships we’ve maintained, the commitments we’ve upheld, the skills we’ve mastered. Life often seems to be “about” paying attention—and the general trend seems to be toward an ever more attentive way of life. Behind the crisis of distraction, in short, there is what amounts to a crisis of attention: the more valuable and in demand attention becomes, the more problematic even innocuous distractions seem to be. (Judging by self-help books, distraction and busyness have become the Scylla and Charybdis of modern existence.)
As with our autonomy obsession, this extreme valuing of attention is a legacy of the Enlightenment: the flip side of Descartes’s “I think, therefore I am” is that we are what we think about. The problem with this conception of selfhood is that people don’t spend all their time thinking in an organized, deliberate way. Our minds wander, and life is full of meaningless moments. Whole minutes go by during which you listen to Rihanna in your head, or look idly at people’s shoes, or remember high school. Sometimes, your mind is just a random jumble of images, sensations, sounds, recollections; at other times, you can stare out the window and think about nothing. This kind of distracted time contributes little to the project of coherent selfhood, and can even seem to undermine it. Where are you when you play Temple Run? Who are you when you look at cat GIFs? If you are what you think about, then what are you when your thoughts don’t add up to anything? Getting distracted, from this perspective, is like falling asleep. It’s like hitting pause on selfhood.
What is to be done about this persistent non-self, or anti-self? You can double down, of course, and attempt, as Crawford does, to sculpt a better you—one in which distraction is replaced with attention. Or you can try, as various people have, to reconceive the self in a way that makes sense of distracted time. Freud, for instance, offered an interpretation of the mind’s apparent randomness. The Surrealists tried to make art out of it. Various philosophers have argued that the self is less coherent than we think it should be. My favorite approach is the one James Joyce took, in “Ulysses”: he just accepted this non-self, in a “no judgment” kind of way. In “Ulysses,” the characters are always distracted. They hum songs in their heads, long for food, have idle sex fantasies. Because they don’t feel guilty about this, they never remark upon it. In fact, they hardly ever feel bad about the thoughts in their heads.
What is the Gestalt Theory?
Gestalt is a decisive movement in the history of psychology. It was born in Germany at the beginning of the 20th century. It was Christian von Ehrenfels, an Austrian philosopher, who gave this movement its name in The Attributes of Form, his most important work. There is no perfect English translation of the term “gestalt,” but we can interpret it as “totality,” “figure,” “structure,” “configuration,” or “organized unity.”
“The whole is more than the sum of its parts” is its maxim. The main Gestalt authors proposed alternatives to the dominant psychological paradigms and made great contributions to cognitive psychology.
This particular focus was a breath of fresh air and allowed people who did not feel represented by the main currents of psychology to find an alternative.
“When they’re in situations where there are multiple sources of information coming from the external world or emerging out of memory, they’re not able to filter out what’s not relevant to their current goal,” said Wagner, an associate professor of psychology. “That failure to filter means they’re slowed down by that irrelevant information.”
So maybe it’s time to stop e-mailing if you’re following the game on TV, and rethink singing along with the radio if you’re reading the latest news online. By doing less, you might accomplish more.
Divided Attention While Driving
In chapter 4 of Goldstein’s Cognitive Psychology, we learned about how being inattentive while driving can lead to more frequent crashes as well as near crashes. In fact, one research study found that 80% of crashes and 67% of near crashes involved driver inattention in the three seconds beforehand (Goldstein 94). For many years, I used a GPS system in my car to navigate to and from points of destination. One near accident caused by using my GPS while driving was all it took for me to realize the dangers of trying to juggle multiple cognitive tasks while simultaneously operating a vehicle. Luckily, I learned my lesson without the devastation of a bad car accident, but I hope that as this research makes its way into the media more frequently, people will start to realize that juggling multiple complex cognitive tasks while driving is not a skill one aims to master, but an impossible task for even the most skilled drivers.
I learned my lesson about the dangers of shifting focus while driving several years ago while I was using my GPS to navigate through downtown Los Angeles. After getting off my suggested exit, I noticed orange cones and a construction crew blocking off the street my GPS was guiding me towards. As a man guided the cars ahead of me around the construction site, I found myself glancing at my GPS, when suddenly the car in front of me made an abrupt stop, forcing me to drop my GPS and slam on my brakes. I ended up missing the car in front of me by about an inch, and felt my heart jump out of my chest. From that moment on I swore to never use my GPS while driving.
For many years I assumed that juggling cognitive tasks while driving was a skill that could be mastered, but in time I realized just how wrong I was in this assumption. Due to media reports revealing the dangers of texting or talking on a phone while driving, I always tried to avoid using my cell phone while behind the wheel, but for some odd reason made exceptions for other devices such as a GPS, an iPod, or a quick meal on-the-go. I figured that I had this multitasking thing down, and that I could easily shift my attention while driving and manage to safely arrive at whatever destination I sought at the time. One heart-pounding near accident exposed just how wrong I was, and made me realize that all the skill in the world cannot prepare you for sudden cognitively complex situations on the road.
To conclude, while driving one must remember to remain focused on the road and to always be prepared to stop suddenly, change lanes, or steer around an obstacle, because obstacles often present themselves when we least expect them, and we have to be ready to react quickly and efficiently. The bottom line is that cognitive multitasking while driving is not a skill that one can master with practice, but a risky activity for even the most skilled drivers. Therefore, any and all distractions that draw our eyes from the road are potentially dangerous and ought to be avoided to prevent future accidents.
Goldstein, B. (2011). Attention. In Cognitive psychology: Connecting mind, research, and everyday experience (3rd ed.). Belmont, CA: Wadsworth.
Why do we Need a Taxonomy?
Over the years, magicians have acquired vast amounts of useful knowledge about effective misdirection. Although much of this knowledge has been discussed in theoretical articles and books, it tends to be described only in the context of individual magic tricks; making sense of—or even just accessing—this knowledge is often challenging for both magicians and non-magicians alike.
One way to handle this is via a taxonomy. Taxonomies are central to many scientific domains, aiding our understanding in fields such as chemistry, biology, and even mineralogy. If we intend to truly understand any aspect of magic, including misdirection, a taxonomy must be a crucial part of this endeavor (Rensink and Kuhn, under review).
Previous taxonomies of misdirection were developed from the perspective of magic performance (Leech, 1960; Ascanio, 1964; Randal, 1976; Bruno, 1978; Sharpe, 1988), or were based on rather informal psychological principles (Lamont and Wiseman, 1999). The central aim of our effort is to develop a more rigorous and less subjective system, one based as much as possible on known psychological mechanisms. Among other things, this approach can help draw more direct links between practical principles and current scientific understanding of the human mind.
I understand selective attention and selective hearing, but what do you call it when you send someone an email, so the words are written right in front of them, and yet they interpret them as something completely different?
For example, what if I write, "I don't think he will like that," and someone reads it to say, "He does not like that," even though that is not what I wrote.
The difference can sometimes easily be seen if you just identify the noun and the verb in each sentence. In the first sentence the noun is "I" and the verb is "think"; in the second sentence the noun is "He" and the verb is "like". So "I think" and "He likes" are completely different in meaning. And yet this happens all the time. Why? What is this? It seems to be happening more and more these days. Is it an attention disorder?
mutsy September 29, 2010
Cupcakes15 - This often leads a teacher to think that the child is not working up to their potential, even though the child may be suffering from a learning disability.
An educational psychologist as well as a neurologist should test children afflicted with this condition.
Miami Children’s Hospital has a program devoted to diagnosing and treating ADHD. A children’s hospital is an excellent resource for treating children with selective attention and memory difficulties.
Often these children suffer from poor attention and memory, which makes building concepts in learning difficult.
There is a non profit organization that has multiple chapters throughout the country that offer a support group for parents as well as children affected with ADHD.
This group called CHADD gives children a chance to meet other kids with the same condition. It also provides parents with information that could help them deal with the disorder in a more positive fashion.
Moldova - That is so true. Selective attention and perception are low among those afflicted with ADHD.
ADHD is attention deficit hyperactivity disorder, which affects both children and adults. It can be a very frustrating condition because the person afflicted with it has trouble finishing a task or project.
Often, their mind becomes distracted and any minor stimuli can set this off. Many children fall behind in school because not only can they not finish their school work, but it is difficult for them to concentrate when teachers are offering lectures.
This makes learning difficult, and it is not uncommon for children with ADHD to be held back a grade due to these academic difficulties.
It is a misconception that these children are not smart; on the contrary, many are intellectually gifted. But although their mental capacity for some subjects is superior, they are not always balanced across all subjects. Moldova September 29, 2010
One example of selective attention is when a child does not listen when discussing a chore or homework assignment that needs to be done and eventually does not do it.
This same child could be told pleasant news about a trip or a toy that you would buy for them and they are all ears.
In this situation, the child tunes out what they don’t want to hear, but absorbs what they do enjoy hearing.
This is why most people are not effective when they resort to nagging. The reason is simple, children tend to tune out negative information and if the nagging is lengthy enough they will avoid anything that is mentioned.
In order to develop attention memory, it is best to have the child repeat the task that you asked him or her to do. This way you can check to see if the child is paying attention. This is just one attention theory regarding selective attention in psychology.
Imagine that, driving across town, you’ve fallen into a reverie, meditating on lost loves or calculating your next tax payments. You’re so distracted that you rear-end the car in front of you at 10 miles an hour. You probably think: Damn. My fault. My mind just wasn’t there.
By contrast, imagine that you drive across town in a state of mild exhilaration, multitasking on your way to a sales meeting. You’re drinking coffee and talking to your boss on a cellphone, practicing your pitch. You cause an identical accident. You’ve heard all the warnings about cellphones and driving—but on a gut level, this wreck might bewilder you in a way that the first scenario didn’t. Wasn’t I operating at peak alertness just then? Your brain had been aroused to perform several tasks, and you had an illusory sense that you must be performing them well.
That illusion of competence is one of the things that worry scholars who study attention, cognition, and the classroom. Students’ minds have been wandering since the dawn of education. But until recently—so the worry goes—students at least knew when they had checked out. A student today who moves his attention rapid-fire from text-messaging to the lecture to Facebook to note-taking and back again may walk away from the class feeling buzzed and alert, with a sense that he has absorbed much more of the lesson than he actually has.
“Heavy multitaskers are often extremely confident in their abilities,” says Clifford I. Nass, a professor of psychology at Stanford University. “But there’s evidence that those people are actually worse at multitasking than most people.”
Indeed, last summer Nass and two colleagues published a study that found that self-described multitaskers performed much worse on cognitive and memory tasks that involved distraction than did people who said they preferred to focus on single tasks. Nass says he was surprised at the result: He had expected the multitaskers to perform better on at least some elements of the test. But no. The study was yet another piece of evidence for the unwisdom of multitasking.
Experiments like that one have added fuel to the perpetual debate about whether laptops should be allowed in classrooms. But that is just one small, prosaic part of this terrain. Nass and other scholars of attention and alertness say their work has the potential to illuminate unsettled questions about the nature of learning, memory, and intelligence.
As far back as the 1890s, experimental psychologists were testing people’s ability to direct their attention to multiple tasks. One early researcher asked her subjects to read aloud from a novel while simultaneously writing the letter A as many times as possible. Another had people sort cards of various shapes while counting aloud by threes.
Those early scholars were largely interested in whether attention is generated by conscious effort or is an unwilled effect of outside forces. The consensus today is that there are overlapping but neurologically distinct systems: one of controlled attention, which you use to push yourself to read another page of Faulkner, and one of stimulus-driven attention, which kicks in when someone shatters a glass behind you.
But those scholars also became intrigued by the range of individual variation they found. Some people seemed to be consistently better than others at concentrating amid distraction. At the same time, there were no superstars: Beyond a fairly low level of multitasking, everyone’s performance breaks down. People can walk and chew gum at the same time, but not walk, chew gum, play Frisbee, and solve calculus problems.
In a famous paper in 1956, George A. Miller (then at Harvard, now at Princeton) suggested that humans’ working-memory capacity—that is, their ability to juggle facts and perform mental operations—is limited to roughly seven units. When people are shown an image of circles for a quarter of a second and then asked to say how many circles they saw, they do fine if there were seven or fewer. (Sometimes people do well with as many as nine.) Beyond that point, they estimate. Likewise, when people are asked to repeat an unfamiliar sequence of numbers or musical tones, their limit on a first try is roughly seven.
And that is under optimal conditions. If a person is anxious or fatigued or in the presence of an attractive stranger, his working-memory capacity will probably degrade.
What Miller called the informational bottleneck has been recognized as a profound constraint on human cognition. Crudely speaking, there are two ways to manage its effects. One is to “chunk” information so that you can, in effect, pack more material into one of those seven units. As Miller put it, “A man just beginning to learn radiotelegraphic code hears each dit and dash as a separate chunk. Soon he is able to organize these sounds into letters, and then he can deal with the letters as chunks. Then the letters organize themselves as words, which are still larger chunks, and he begins to hear whole phrases.” That sort of process is obviously central to many kinds of learning.
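Chunking is easy to picture in code. The sketch below is purely illustrative: it groups a long digit string into fixed-size pieces, the way a learner might recode 16 unrelated digits as four familiar years, fitting the same material into fewer working-memory slots.

```python
def chunk(seq, size):
    """Split a sequence into consecutive pieces of at most `size` items."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

# Sixteen digits overwhelm a seven-unit working memory...
digits = "1776149218651969"

# ...but chunked as four well-known years, they occupy only four units.
years = chunk(digits, 4)
print(years)  # → ['1776', '1492', '1865', '1969']
```

The information is unchanged; only its packaging differs, which is exactly Miller's point about the radiotelegraph operator.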
The second method for managing the bottleneck—and the one that concerns us here—is to manage attention so that unwanted stimuli do not crowd the working memory. That might sound simple. But as the Swedish neuroscientist Torkel Klingberg explains in his recent book The Overflowing Brain: Information Overload and the Limits of Working Memory (Oxford University Press), scholars are far from agreement about how to describe the relationship between attention and working memory. Does a poor attention system cause poor working-memory performance, or does the causation sometimes work in the other direction?
One common metaphor is that controlled attention acts as a “nightclub bouncer,” preventing irrelevant stuff from getting into working memory. A few years ago, Klingberg and a colleague conducted brain-imaging experiments that suggested that a region known as the globus pallidus seems to be highly active when people successfully fend off distraction.
“Why is it that some people seem to reason well and others don’t?” asks Michael J. Kane, an associate professor of psychology at the University of North Carolina at Greensboro. “Variability in working-memory capacity accounts for about half the variability in novel reasoning and reading comprehension. There’s disagreement about what to make of that relationship. But there are a number of mechanisms that seem to be candidates for part of the story.”
One of those seems to be attentional, Kane says. “The view that my colleagues and I are putting forward is that part of the reason that people who differ in working-memory capacity differ in other things is that higher-working-memory-capacity people are simply better able to control their attention.”
In other words—to borrow a metaphor from other scholars—people with strong working-memory capacities don’t have a larger nightclub in their brains. They just have better bouncers working the velvet rope outside. Strong attentional abilities produce stronger fluid intelligence, Kane and others believe.
Attention and distraction are entangled not only in reasoning and working memory, but also in the encoding of information into long-term memory.
In 2006 a team of scholars led by Karin Foerde, who is now a postdoctoral fellow in psychology at Columbia University, reported on an experiment suggesting that distraction during learning can be harmful, even if the distraction doesn’t seem to injure students’ immediate performance on their tasks.
Foerde and her colleagues asked their subjects to “predict the weather” based on cues that they slowly learned over many computer trials. For example, seeing an octagon on the screen might mean that there was a 75-percent chance of rain on the next screen. The subjects would never be told the exact percentage, but gradually they would learn to infer that most of the time, an octagon meant rain.
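The structure of such a task can be sketched in a few lines. This is not Foerde's actual experiment: the cue names and probabilities below are illustrative assumptions, and the "learner" here simply tallies observed frequencies, which is one simple way a subject could converge on the hidden rules over many trials.

```python
import random

# Hypothetical cue-to-outcome probabilities, hidden from the learner.
# (Illustrative values, not the study's actual stimuli or parameters.)
CUES = {"octagon": 0.75, "triangle": 0.20}  # P(rain | cue)

def run_trials(n, seed=0):
    """Simulate n feedback trials; return the learner's frequency estimates."""
    rng = random.Random(seed)
    counts = {cue: [0, 0] for cue in CUES}  # [rain occurrences, total trials]
    for _ in range(n):
        cue = rng.choice(list(CUES))
        rain = rng.random() < CUES[cue]     # outcome drawn from hidden rule
        counts[cue][0] += rain
        counts[cue][1] += 1
    # The learner never sees the percentages, only trial-by-trial feedback.
    return {cue: r / t for cue, (r, t) in counts.items() if t}

estimates = run_trials(2000)
```

After enough trials, the frequency estimates settle near the hidden probabilities, which is the gradual, feedback-driven inference the task is designed to elicit.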
During one of their four training runs, the subjects were distracted by a task that asked them to count musical tones while they did the forecasting. At first glance, the distraction did not seem to harm the subjects’ performance: Their “weather forecasts” under distraction were roughly as accurate as they were during the other three trials.
But when they were asked afterward to describe the general probabilistic rules for that trial (for example, a triangle meant sunshine 80 percent of the time), they did much worse than they did after the undistracted trials.
Foerde and her colleagues argue that when the subjects were distracted, they learned the weather rules through a half-conscious system of “habit memory,” and that when they were undistracted, they encoded the weather rules through what is known as the declarative-memory system. (Indeed, brain imaging suggested that different areas of the subjects’ brains were activated during the two conditions.)
That distinction is an important one for educators, Foerde says, because information that is encoded in declarative memory is more flexible—that is, people are more likely to be able to draw analogies and extrapolate from it.
“If you just look at performance on the main task, you might not see these differences,” Foerde says. “But when you’re teaching, you would like to see more than simple retention of the information that you’re providing people. You’d like to see some evidence that they can use their information in new ways.”
If single-minded attention is vital to learning, how far should college instructors go to protect their students from distraction? Should laptops be barred at the classroom door?
One prominent scholar of attention is prepared to go even further than that.
“I’m teaching a class of first-year students,” says David E. Meyer, a professor of psychology at the University of Michigan at Ann Arbor. “This might well have been the very first class they walked into in their college careers. I handed out a sheet that said, ‘Thou shalt have no electronic devices in the classroom.’ I don’t want to see students with their computers out, because you know they’re surfing the Web. I don’t want to see them taking notes. I want to see them paying attention to me.”
Wait a minute. No notes? Does that include pen-and-paper note-taking?
“Yes, I don’t want that going on either,” Meyer says. “I think with the media that are now available, it makes more sense for the professor to distribute the material that seems absolutely crucial either after the fact or before the fact. Or you can record the lecture and make that available for the students to review. If you want to create the best environment for learning, I think it’s best to have students listening to you and to each other in a rapt fashion. If they start taking notes, they’re going to miss something you say.”
Give Meyer his due. He has done as much as any scholar to explain how and why multitasking degrades performance. In a series of papers a decade ago, he and his colleagues determined that even under optimal conditions, it takes a significant amount of time for the brain to switch from one goal to another, and from one set of rules to another.
“I’ve done demonstrations in class,” Meyer says, “whereby they can see the costs of multitasking as opposed to paying attention diligently to just one stream of input.”
He might, for example, ask students to recite the letters A through J as fast as possible, and then the numbers 1 through 10. Each of those tasks typically takes around two seconds. Then he asks them to interweave the two recitations as fast as they can: “A, 1, B, 2,” and so on. Does that take four seconds? No, it typically requires 15 to 20 seconds, and even then many students make mistakes.
“This is because there is a switching time cost whenever the subject shifts from the letter-recitation task to the number-recitation task, or vice versa,” Meyer says. “And those extra time costs quickly add up.”
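The arithmetic behind the demonstration can be made explicit. The figures for the single tasks are the article's; the per-switch cost below is a back-of-envelope assumption chosen only to show how small costs accumulate into the observed 15-to-20-second range.

```python
# Illustrative accounting for Meyer's interleaving demonstration.
single_task = 2.0        # seconds to recite A-J alone, or 1-10 alone
items = 10               # letters (and numbers) per task
switch_cost = 0.8        # ASSUMED seconds lost per task switch

# Alternating "A, 1, B, 2, ..." forces a switch before every item
# after the first: 19 switches across the 20 recited items.
switches = 2 * items - 1

interleaved = 2 * single_task + switches * switch_cost
print(round(interleaved, 1))  # → 19.2
```

With these numbers, two 2-second tasks balloon to roughly 19 seconds once the repeated goal-and-rule switching is paid for, consistent with what Meyer's students observe.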
Several other scholars of attention, however, concede that they haven’t tried to set firm rules about laptops in class.
“I’ve thought about having a special laptop section in my lecture hall,” says Kane, the psychologist at Greensboro. “That way students wouldn’t have to be distracted by their neighbors’ screens if they don’t want to be.” Beyond that, however, Kane is reluctant to move. Many students do legitimately take notes on laptops, and he doesn’t want to prevent that.
Stanford’s Nass, likewise, allows laptops in his classes, though he feels sheepish about that choice, given his research. “It would just seem too strange to ban laptops in a class on computers and society,” he says.
Many other scholars say instructors should make peace with the new world of skimming and multitasking. N. Katherine Hayles, a professor emerita of English at the University of California at Los Angeles, has argued in a series of essays that the new, multimedia world generates “hyper attention”—which is different from, but not necessarily worse than, attention as traditionally understood. In a media-rich environment, she believes, young people’s brains are getting better at making conceptual connections across a wide variety of domains.
“One of the basic tenets of good teaching is that you have to start where the students are,” Hayles says. “And once you find out where they are, a good teacher can lead them almost anywhere. Students today don’t start in deep attention. They start in hyper attention. And our pedagogical challenge will be to combine hyper attention with deep attention and to cultivate both. And we can’t do that if we start by stigmatizing hyper attention as inferior thinking.”
Nass is skeptical. In a recent unpublished study, he and his colleagues found that chronic media multitaskers—people who spent several hours a day juggling multiple screen tasks—performed worse than otherwise similar peers on analytic questions drawn from the LSAT. He isn’t sure which way the causation runs here: It might be that media multitaskers are hyperdistractible people who always would have done poorly on LSAT questions, even in the pre-Internet era. But he worries that media multitasking might actually be destroying students’ capacity for reasoning.
“One of the deepest questions in this field,” Nass says, “is whether media multitasking is driven by a desire for new information or by an avoidance of existing information. Are people in these settings multitasking because the other media are alluring—that is, they’re really dying to play Freecell or read Facebook or shop on eBay—or is it just an aversion to the task at hand?”
When Nass was a high-school student, decades ago, his parents were fond of an old quotation from Sir Joshua Reynolds: “There is no expedient to which man will not resort to avoid the real labor of thinking.” That is the conundrum that has animated much of his career.
“I don’t think that law students in classrooms are sitting there thinking, Boy, I’d rather play Freecell than learn the law,” Nass says. “I don’t think that’s the case. What happens is that there’s a moment that comes when you say, Boy, I can do something really easy, or I can do something really hard.”