Is multitasking really a modern-day evil?

In A Prehistory of Ordinary People, anthropologist Monica Smith argues that rather than deploring multitasking, we should celebrate it as the human ability that separates us from other animals.

Her thesis, that we owe our success to our ability to juggle multiple competing demands and to repeatedly pick up and put down the same project until it is complete, certainly has merit. Yes, memory and imagination (our ability to project into the future) enable us to remember the tasks we’re in the middle of, and allow us to switch between tasks. And this is undeniably a good thing.

I agree (and I don’t think I have ever denied) that multitasking is not in itself ‘bad’. I don’t think it’s new, either. These are, I would suggest, straw men — but I’m not decrying her raising them. Reports in the media are prone to talking about multitasking as if it is evil and novel, and a symptom of all that is wrong in modern life. It is right to challenge those assumptions.

The problem with multitasking is not that it is inherently evil; the problem is knowing when to stop.

There are two main dangers with multitasking, which we might term the acute and the chronic. The acute danger arises when we multitask while doing something that could endanger our own or others’ safety. Driving a vehicle is the obvious example, and I have reported on many studies over the past few years that demonstrate the relative dangers of different tasks (such as talking on a cellphone) while driving a car. Similarly, interruptions in hospitals increase the probability of clinical errors, some of which can have dire consequences. And of course on a daily level, acute problems can arise when we fail to do one task adequately because we are trying to do other tasks at the same time.

The chronic danger, which has produced endless articles in recent years, is the suggestion that all this technology-driven multitasking is making us incapable of deep thought or focused attention.

But Smith argues that we do not, in fact, engage in levels of multitasking that are that much different from those exhibited in prehistoric times. ‘That much’ is of course the get-out phrase. How much difference is too much? Is there a point at which multitasking is too much, and have we reached it?

These are the real questions, and I don’t think the answer is a single line we can draw. Research on multitasking while driving has revealed significant differences between drivers: as a function of age, of personal attributes, and of emotional or physical state. It has also revealed differences between tasks: talking that involves emotions or decisions is more distracting than less engaging conversation, and half-overheard conversations are surprisingly distracting (suggesting that having a passenger in the car talking on a phone may be more distracting than talking on one yourself!). These are the sorts of things we need to know: not that multitasking is bad, but when it is bad.

This approach also applies to the chronic problem, although that is much more difficult to study. These are some of the questions we need answered:

  • Does chronic multitasking affect our long-term ability to concentrate, or only our ability to concentrate while in the multitasking environment?
  • If it does affect our long-term ability to concentrate, can we reverse the effect? If so, how?
  • Is the effect on children and adolescents different from the effect on adults?
  • Does chronic multitasking produce beneficial cognitive effects? If so, is this of greater benefit for some people rather than others? (For example, multitasking training may benefit older adults)
  • What are the variables in multitasking that affect our cognition in these ways? (For example, the number of tasks being performed simultaneously; the length of time spent on each one before switching; the number of times switching occurs within a defined period; the complexity of the tasks; the ways in which these and other factors might interact with temporary personal variables, such as mood, fatigue, alcohol, and more durable personal variables such as age and personality)

We need to be thinking in terms of multitasking contexts rather than multitasking as one uniform (and negative) behavior. I would be interested to hear your views on multitasking contexts you find beneficial, pleasant or useful, and contexts you find difficult, unpleasant or damaging.

Shaping your cognitive environment for optimal cognition

Humans are the animals that manipulate their cognitive environment.

I reported recently on an intriguing study involving an African people, the Himba. The study found that while the Himba displayed an admirable degree of focus (in a visual perception task) when living a traditional life, they showed the same de-focused, distractible attention as typical urban dwellers once they moved to town. On the other hand, digit span (a measure of working memory capacity) was smaller in the traditional Himba than in the urbanized Himba.

This is fascinating, because working memory capacity has proved remarkably resistant to training. Yes, we can improve performance on specific tasks, but it has proven more difficult to improve the general, more fundamental, working memory capacity.

However, there have been two areas where more success has been found. One is the area of ADHD, where training has appeared to be more successful. The other is an area no one thinks of in this connection, because no one thinks of it in terms of training, but rather in terms of development — the increase in WMC with age. So, for example, average WMC increases from 4 chunks at age 4, to 5 at age 7, to 6 at age 10, to 7 at age 16. It starts to decrease again in old age. (Readers familiar with my work will note that these numbers are higher than the numbers we now tend to quote for WMC — these numbers reflect the ‘magic number 7’, i.e. the number of chunks we can hold when we are given the opportunity to actively maintain them.)
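The developmental figures quoted above can be laid out as a tiny sketch. This is purely illustrative: the four measured points come from the text, but the linear interpolation between them (and the function name) is my own assumption, not anything from the developmental literature.

```python
# The developmental WMC figures quoted above, as a simple lookup.
# Ages between the quoted points are estimated by linear interpolation,
# which is an assumption made for illustration, not data.

WMC_BY_AGE = {4: 4, 7: 5, 10: 6, 16: 7}  # age -> chunks ('magic number 7' counts)

def estimated_wmc(age: float) -> float:
    """Linearly interpolate WMC between the quoted measurement points."""
    ages = sorted(WMC_BY_AGE)
    if age <= ages[0]:
        return float(WMC_BY_AGE[ages[0]])
    if age >= ages[-1]:
        return float(WMC_BY_AGE[ages[-1]])
    for lo, hi in zip(ages, ages[1:]):
        if lo <= age <= hi:
            frac = (age - lo) / (hi - lo)
            return WMC_BY_AGE[lo] + frac * (WMC_BY_AGE[hi] - WMC_BY_AGE[lo])

print(estimated_wmc(10))  # 6 chunks, as quoted in the text
print(estimated_wmc(13))  # 6.5, interpolated (my assumption)
```

Note that the flat line after age 16 is itself the point at issue in the discussion that follows: whether the late plateau and decline are fixed, or partly environmental.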

Relatedly, there is the Flynn effect. The Flynn effect is ostensibly about IQ (specifically, the rise in average IQ over time), but IQ has a large WM component. Having said that, when you break IQ tests into their sub-components and look at their change over time, you find that the Digit Span subtest is one component that has made almost no gain since 1972.

But of course 1972 is still very modern! There is no doubt that there are severe constraints on how much WMC can increase, so it’s reasonable to assume we long since hit the ceiling (speaking of urbanized Western society as a group, not individuals).

It’s also reasonable to assume that WMC is affected by purely physiological factors involving connectivity, processing speed and white matter integrity — hence at least some of the age effect. But does it account for all of it?

What the Himba study suggests (and I do acknowledge that we need more and extended studies before taking these results as gospel), is that urbanization provides an environment that encourages us to use our working memory to its capacity. Urbanization provides a cognitively challenging environment. Our focus is diffused for that same reason — new information is the norm, rather than the exception; we cannot focus on one bit unless it is of such threat or interest that it justifies the risk.

ADHD shows us, perhaps, what can happen when this process is taken to the extreme. So we might take these three groups (traditional Himba, urbanized Himba, individuals with ADHD) as points on the same continuum. The continuum reflects degree of focus, and the groups reflect environmental effects. This is not to say that there are not physiological factors predisposing some individuals to react in such a way to the environment! But the putative effects of training on ADHD individuals points, surely, to the influence of the environment.

Age provides an intriguing paradox, because as we get older, two things tend to happen: we have a much wider knowledge base, meaning that less information is new, and we usually shrink our environment, meaning again that less information is new. All things being equal, you would think that would mean our focus could afford to draw in. However, as my attentive readers will know, declining cognitive capacity in old age is marked by increasing difficulties in ignoring distraction. In other words, it’s the urbanization effect writ larger.

How to account for this paradox?

Perhaps it simply reflects the fact that the modern environment is so cognitively demanding that these factors aren’t sufficient on their own to enable us to relax our alertness and tighten our focus, in the face of the slowdown in processing speed that typically occurs with age (there’s some evidence that it is this slowdown that makes it harder for older adults to suppress distracting information). Perhaps the problem is not simply, or even principally, the complexity of our environment, but the speed of it. You only have to compare a modern TV drama or sit-com with one from the 70s to see how much faster everything now moves!

I do wonder whether, in a less cognitively demanding environment, say, a traditional Himba village, WMC shows the same early rise and late decline. In an environment where change is uncommon, it is natural for elders to be respected for their accumulated wisdom — experience is all — but perhaps this respect also reflects a constancy in WMC (and thus ‘intelligence’), so that elders are not disadvantaged in the way they may be in our society. Just a thought.

Here’s another thought: it’s always seemed to me (this is not in any way a research-based conclusion!) that musicians and composers, and writers and professors, often age very well. I’ve assumed this was because they are keeping mentally active, and certainly that must be part of it. But perhaps there’s another reason, possibly even a more important reason: these are areas of expertise where the practitioner spends a good deal of time focused on one thing. Rather than allowing their attention to be diffused throughout the environment all the time, they deliberately shut off their awareness of the environment to concentrate on their music, their writing, their art.

Perhaps, indeed, this is the factor that distinguishes the activities that help fight age-related cognitive decline from those that don’t.

I began by saying that humans are the animals that manipulate their cognitive environment. I think this is the key to fighting age-related cognitive decline, or ADHD if it comes to that. We need to be aware how much our brains try to operate in a way that is optimal for our environment — meaning that, by controlling our environment, we can change the way our brain operates.

If you are worried about your ‘scattiness’, or if you want to prevent or fight age-related cognitive decline, I suggest you find an activity that truly absorbs and challenges you, and engage in it regularly.

The increase in WMC in Himba who moved to town also suggests something else. Perhaps the reason that WM training programs have had such little success is because they are ‘programs’. What you do in a specific environment (the bounds of a computer and the program running on it) does not necessarily, or even usually, transfer to the wider environment. We are contextual creatures, used to behaving in different ways with different people and in different places. If we want to improve our WMC, we need to incorporate experiences that challenge and extend it into our daily life.

This, of course, emphasizes my previous advice: find something that absorbs you, something that becomes part of your life, not something you 'do' for an hour some days. Learn to look at the world in a different way, through music or art or another language or a passion (Civil War history; Caribbean stamps; whatever).

You can either let your cognitive environment shape you, or shape your cognitive environment.

Do you agree? What's your cognitive environment, and do you think it has affected your cognitive well-being?

What babies can teach us about effective information-seeking and management

Here’s an interesting study that’s just been reported: 72 seven- and eight-month-old infants watched video animations of familiar fun items being revealed from behind a set of colorful boxes. What the researchers found is that the babies reliably lost interest when the video became too predictable – and also when the sequence of events became too unpredictable.

In other words, there’s a level of predictability/complexity that is “just right” (the researchers are calling this the ‘Goldilocks effect’) for learning.

Now it’s true that the way babies operate is not necessarily how we operate. But this finding is consistent with other research suggesting that adult learners find it easier to learn and pay attention to material that is at just the right level of complexity/difficulty.

The findings help explain why some experiments have found that infants reliably prefer familiar objects, while other experiments have found instead a preference for novel items. Because here’s the thing about the ‘right amount’ of surprise or complexity — it’s a function of the context.
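The ‘Goldilocks’ idea, and its dependence on context, can be sketched as a toy inverted-U model. To be clear, this is not the researchers’ model: the Gaussian shape, the function, and every parameter name here are assumptions chosen only to illustrate that the ‘just right’ point moves with the learner.

```python
# A toy inverted-U ('Goldilocks') sketch: attention peaks when an item's
# complexity matches the learner's current level, and falls off when the
# item is too predictable or too surprising for that learner.
import math

def attention(complexity: float, learner_level: float, width: float = 1.0) -> float:
    """Gaussian inverted-U: highest when complexity matches the learner's level."""
    return math.exp(-((complexity - learner_level) ** 2) / (2 * width ** 2))

# The same item can be 'just right' for one learner and too simple for another,
# which is why experiments can find either a familiarity or a novelty preference:
print(attention(complexity=3.0, learner_level=3.0))  # 1.0: 'just right'
print(attention(complexity=3.0, learner_level=6.0))  # near 0: too predictable now
```

The design point is that there is no single ‘right amount’ of novelty: moving `learner_level` (what the learner already knows) moves the peak, which is the context-dependence described above.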

And this is just as true for us adults as it is for them.

We live in a world that’s flooded with information and change. Clay Shirky says: “There’s no such thing as information overload — only filter failure.” Brian Solis re-works this as: “information overload is a symptom of our inability to focus on what’s truly important or relevant to who we are as individuals, professionals, and as human beings.”

I think this is simplistic. Maybe that’s just because I’m interested in too many things and they all tie together in different ways, and because I believe, deeply, in the need to cross boundaries. We need specialists, sure, because every subject now has too much information even for a specialist to master. But maybe that’s what computers are going to be for. More than anything else, we need people who can see outside their specialty.

Part of the problem as we get older, I think, is that we expect too much of ourselves. We expect too much of our memory, and we expect too much of our information-processing abilities. Babies know it. Children know it. You take what you can; each taking is a step; on the next step you will take some more. And eventually you will understand it all.

Perhaps it is around adolescence that we get the idea that this isn’t good enough. Taking bites is for children; a grown-up person should be able to read a text/hear a conversation/experience an event and absorb it all. Anything less is a failure. Anything less is a sign that you’re not as smart as others.

Young children drive their parents crazy wanting the same stories read over and over again, but while the stories may seem simple to us, that’s because we’ve forgotten how much we’ve learned. Probably they are learning something new each time (and quite possibly we could learn something from the repetitions too, if we weren’t convinced we already knew it all!).

We don’t talk about the information overload our babies and children suffer, and yet, surely, we should. Aren’t they overloaded with information? When you think about all they must learn … doesn’t that put our own situation in perspective?

You could say they are filtering out what they need, but I don’t think that’s accurate. Because they keep coming back to pick out more. What they’re doing is taking bites. They’re absorbing what they need in small, attainable bites. Eventually they will get through the entire meal (leaving to one side, perhaps, any bits that are gristly or unpalatable).

The researchers of the ‘Goldilocks’ study tell parents they don’t need to worry about providing this ‘just right’ environment for their baby. Just provide a reasonably stimulating environment. The baby will pick up what they need at the time, and ignore the rest.

I think we can learn from this approach. First of all, we need to cultivate an awareness of the complexity of an experience (I’m using this as an umbrella word encompassing everything from written texts to personal events). Any experience must be considered in its context, and what might appear (on present understanding) to be quite simple might become less so in the light of new knowledge. So the complexity of an event is not a fixed value, but one that reflects your relationship to it at that time. This suggests we need different information-management tools for different levels of complexity (e.g., tagging that lets you easily pull out items that need repeated experiencing at appropriate occasions).

(Lucky) small children have an advantage (this is not the place to discuss the impact of ‘disadvantaged’ backgrounds) — the environment is set up to provide plenty of opportunities to re-experience the information they are absorbing in bites. We are not so fortunate. On the other hand, we have the huge advantage of having far more control over our environment. Babies may use instinct to control their information foraging; we must develop more deliberate skills.

We need to understand that we have different modes of information foraging. There is the wide-eyed, human-curious give-me-more mode — and I don’t think this is a mode to avoid. This wide, superficial mode is an essential part of what makes us human, and it can give us a breadth of understanding that can inform our deeper knowledge of specialist subjects. We may think of this as a recreational mode.

Other modes might include:

  • Goal mode: I have a specific question I want answered
  • Learning mode: I am looking for information that will help me build expertise in a specific topic
  • Research mode: I have expertise in a topic and am looking for information in a specific part of that domain
  • Synthesis mode: I have expertise in one topic and want information from other domains that would enrich my expertise and give me new perspectives

Perhaps you can think of more; I would love to hear other suggestions.

I think being consciously aware of what mode you are in, having specific information-seeking and information-management tools for each mode, and having the discipline to stay in the chosen mode, are what we need to navigate the information ocean successfully.

These are some first thoughts. I would welcome comments. This is a subject I would like to develop.


  • Doing more than one task at a time requires us to switch our attention rapidly between the tasks.
  • This is easier if the tasks don't need much attention.
  • Although we think we're saving time, time is lost when switching between tasks; these time costs increase for complex or unfamiliar tasks.
  • Both alcohol and aging affect our ability to switch attention rapidly.

A very common situation today, and one probably responsible for a great deal of modern anxiety about failing memory, is the one where we’re required to “multitask”, that trendy modern word for trying to do more than one thing at a time. It is a situation for which both the normal consequences of aging and low working memory capacity have serious implications.

There’s an old insult along the lines of “he can’t walk and chew gum”. The insult is a tacit acknowledgment that doing two things at the same time can put a strain on mental resources, and also recognizes (this is the insult part!) that well-practiced activities do not place as much demand on our cognitive resources. We can, indeed, do more than one task at a time, as long as only one of the tasks requires our attention. It is attention that can’t be split.

You may feel that you can, in fact, do two tasks requiring attention simultaneously. For example, talking on a cellphone and driving!

Not true.

What you are in fact doing, is switching your attention rapidly between the two tasks, and you are doing it at some cost.

How big a cost depends on a number of factors. If you are driving a familiar route, with no unexpected events (such as the car in front of you braking hard, or a dog running out on the road), you may not notice the deterioration in your performance. It also helps if the conversation you are having is routine, with little emotional engagement. But if the conversation is stressful, or provokes strong emotion, or requires you to think … well, any of these factors will impact on your ability to drive.

The ability to switch attention between tasks is governed by executive control, a function seated in the prefrontal cortex. This region of the brain appears to be particularly affected by aging, and also by alcohol. Thus, talking on a cellphone while driving drunk is a recipe for disaster! Nor do you have to actually be under the influence to be affected in this way by alcohol; impaired executive control is characteristic of alcoholics.

More commonly, we simply get older, and as we get older we become less able to switch attention quickly.

The ability to switch attention is also related to working memory capacity.

But multitasking is not only a problem for older adults, or those with a low working memory capacity. A study [1] using young adults found that for all types of tasks, time was lost when switching between tasks, and time costs increased with the complexity of the tasks, so it took significantly longer to switch between more complex tasks. Time costs also were greater when subjects switched to tasks that were relatively unfamiliar.

Part of the problem in switching attention is that we have to change “rules”. Rule activation takes significant amounts of time, several tenths of a second — which may not sound much, but can mean the difference between life and death in some situations (such as driving a car), and which even in less dramatic circumstances, adds appreciably to the time it takes to do tasks, if you are switching back and forth repeatedly.

To take an example close to home, people required to write a report while repeatedly checking their email took half again as long to finish the report compared to those who didn't switch between tasks!
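The arithmetic of switch costs can be sketched as follows. The only figure taken from the text is the per-switch cost of a few tenths of a second; the task duration and number of switches below are invented purely for illustration.

```python
# Back-of-envelope arithmetic for task-switching overhead.
# switch_cost of 0.4 s reflects the 'several tenths of a second' of rule
# activation mentioned above; the other numbers are made up for illustration.

def total_time(task_seconds: float, n_switches: int, switch_cost: float = 0.4) -> float:
    """Time to finish the work when it is interrupted n_switches times,
    paying switch_cost seconds of rule re-activation per switch."""
    return task_seconds + n_switches * switch_cost

focused = total_time(task_seconds=600, n_switches=0)        # 10 min, uninterrupted
interleaved = total_time(task_seconds=600, n_switches=150)  # constant email-checking
print(focused, interleaved)  # 600.0 vs 660.0 seconds: a 10% overhead
```

Note that this bare arithmetic understates the real overhead: the email example above found a 50% slowdown, presumably because each interruption also forces re-orientation within the task, over and above the raw rule-activation cost.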

In other words, while multitasking may seem more efficient, it may not actually BE more efficient. It may in fact take more time in the end, and the tasks may of course be performed more poorly. And then there is the stress; switching between tasks places demands on your mental resources, and that is stressful. (And not only are we poorer at such task-switching as we age, we also tend to be less able to handle stress).

There is another aspect to multitasking that deserves mention. It has been speculated that rapid switching between tasks may impede long-term memory encoding. I don’t know of any research on this, but it is certainly plausible.

So, what can we do about it?

Well, the main thing is to be aware of the problems. Accept that multitasking is not a particularly desirable situation; that it costs you time and quality of performance; that your ability to multitask will be impeded by fatigue, alcohol, stress, emotion, and distraction (e.g., don’t add to your problems by having music on as well); and that your ability will also be impaired by age. Understand that multitasking involves switching attention between tasks, not simultaneous performance, and that it will therefore be successful to the extent that the tasks are familiar and well-practiced.

This article originally appeared in the February 2005 newsletter.



1. Rubinstein, J.S., Meyer, D.E., & Evans, J.E. (2001). Executive control of cognitive processes in task switching. Journal of Experimental Psychology: Human Perception and Performance, 27(4), 763–797.