
Strategies

Choosing when to think fast & when to think slow

I recently read an interesting article in the Smithsonian about procrastination and why it’s good for you. Frank Partnoy, author of a new book on the subject, pointed out that procrastination only began to be regarded as a bad thing by the Puritans — earlier (among the Greeks and Romans, for example), it was regarded more as a sign of wisdom.

The examples given about the perils of deciding too quickly made me think about the assumed connection between intelligence and processing speed. We equate intelligence with quick thinking, and time to get the correct answer is part of many tests. So, regardless of the excellence of a person’s cognitive product, the time it takes for them to produce it is vital (at least in tests).

Similarly, one of the main aspects of cognition impacted by age is processing speed, and one of the principal reasons for people to feel that they are ‘losing it’ is because their thinking is becoming noticeably slower.

But here’s the question: does it matter?

Certainly in a life-or-death, climb-the-tree-fast-or-be-eaten scenario, speed is critical. But in today’s world, the major reason for emphasizing speed is the pace of life. Too much to do and not enough time to do it in. So, naturally, we want to do everything fast.

There is certainly a place for thinking fast. I recently looked through a short book entitled “Speed Thinking” by Ken Hudson. The author’s strategy for speed thinking was basically to give yourself a very brief window — 2 minutes — in which to come up with 9 thoughts (the nature of those thoughts depends on the task before you — I’m just generalizing the strategy here). The essential elements are the tight time limit and the lack of a content limit — to accomplish this feat of 9 relevant thoughts in 2 minutes, you need to lose your inner censor and accept any idea that occurs to you.

If you’ve been reading my last couple of posts on flow, it won’t surprise you that this strategy is one likely to produce that state of consciousness (at least, once you’re in the way of it).

So, I certainly think there’s a place for fast thinking. Short bouts like this can re-energize you and direct your focus. But life is a marathon, not a sprint, and of course we can’t maintain such a pace or level of concentration. Nor should we want to, because sometimes it’s better to let things simmer. But how do we decide when it’s best to think fast or best to think slow? (shades of Daniel Kahneman’s wonderful book Thinking, Fast and Slow here!)

In the same way that achieving flow depends on the match between your skill and the task demands, the best speed for processing depends on your level of expertise, the demands of the task, and the demands of the situation.

For example, Sian Beilock (whose work on math anxiety I have reported on) led a study that demonstrated that, while novice golfers putted better when they could concentrate step-by-step on the accuracy of their performance, experts did better when their attention was split between two tasks and when they were focused on speed rather than accuracy.

Another example comes from a monkey study that has just been in the news. In this study, rhesus macaques were trained to reach out to a target. To do so, their brains needed to know three things: where their hand is, where the target is, and the path for the hand to travel to reach the target. If there’s a direct path from the hand to the target, the calculation is simple. But in the experiment, an obstacle would often block the direct path to the target. In such cases, the calculation becomes a little bit more complicated.

And now we come to the interesting bit: two monkeys participated. As it turns out, one was hyperactive, the other more controlled. The hyperactive monkey would quickly reach out as soon as the target appeared, without waiting to see if an obstacle blocked the direct path. If an obstacle did indeed appear in the path (which it did on two-thirds of trials), he had to correct his movement in mid-reach. The more self-controlled monkey, however, waited a little longer, to see where the obstacle appeared, then moved smoothly to the target. The hyperactive monkey had a speed advantage when the way was clear, but the other monkey had the advantage when the target was blocked.

So perhaps we should start thinking of processing speed as a personality, rather than cognitive, variable!

[An aside: it’s worth noting that the discovery that the two monkeys had different strategies, undergirded by different neural activity, only came about because the researcher was baffled by the inconsistencies in the data he was analyzing. As I’ve said before, our focus on group data often conceals many fascinating individual differences.]

The Beilock study indicates that the ‘correct’ speed — for thinking, for decision-making, for solving problems, for creating — will vary as a function of expertise and attentional demands (are you trying to do two things at once? Is something in your environment or your own thoughts distracting you?). In which regard, I want to mention another article I recently read — a blog post on EdWeek, on procedural fluency in math learning. That post referenced an article on timed tests and math anxiety (which I’m afraid is only available if you’re registered on the EdWeek site). This article makes the excellent point that timed tests are a major factor in developing math anxiety in young children. Which is a point I think we can generalize.

Thinking fast, for short periods of time, can produce effective results, and the rewarding mental state of flow. Being forced to try and think fast, when you lack the necessary skills, is stressful and non-productive. If you want to practice thinking fast, stick with skills or topics that you know well. If you want to think fast in areas in which you lack sufficient expertise, work on slowly and steadily building up that expertise first.

Taking things too seriously

I was listening to a podcast the other day. Two psychologists (Andrew Wilson and Sabrina Galonka) were being interviewed about embodied cognition, a topic I find particularly interesting. As an example of what they meant by embodied cognition (something rather more specific than the fun and quirky little studies that are so popular nowadays — e.g., making smaller estimations of quantities when leaning to the left; squeezing a soft ball making it more likely that people will see gender neutral faces as female while squeezing a hard ball influences them to see the faces as male; holding a heavier clipboard making people more likely to judge currencies as more valuable and their opinions and leaders as more important), they mentioned the outfielder problem. Without getting into the details (if you’re interested, the psychologists have written a good article on it on their blog), here’s what I took away from the discussion:

We used to think that, in order to catch a ball, our brain was doing all these complex math- and physics-related calculations — try programming a robot to do this, and you’ll see just how complex the calculations need to be! And of course this is that much more complicated when the ball isn’t aimed at you and is traveling some distance (the outfielder problem).

Now we realize it’s not that complicated — our outfielder is moving, and this is the crucial point. Apparently (according to my understanding), if he moves at the right speed to make his perception of the ball’s speed uniform (the ball decelerates as it goes up, and accelerates as it comes down, so the catcher does the inverse: running faster as the ball rises and slower as it falls), then — if he times it just right — the ball will appear to be traveling a straight line, and the mental calculation of where it will be is simple.
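This running strategy has, in fact, been formalized — one classic version is Chapman’s “optical acceleration cancellation” hypothesis, which I’m supplying here as an illustration (the podcast itself didn’t use that name). The fielder runs so that the tangent of the ball’s elevation angle rises at a constant rate; remarkably, doing so delivers him to exactly the spot where the ball lands, with no physics calculations required. A minimal sketch, assuming an idealized no-drag fly ball (all velocities and the rate constant k are arbitrary illustration values):

```python
def chapman_catch(vx=20.0, vz=25.0, g=9.81, k=0.5):
    """Idealized fly ball: x_ball(t) = vx*t, height z(t) = vz*t - g*t**2/2."""
    T = 2 * vz / g        # time of flight (when z returns to zero)
    landing = vx * T      # where the ball actually lands

    # Chapman's strategy: run so tan(elevation angle to the ball) grows
    # linearly, tan(alpha) = k*t. Since tan(alpha) = z(t) / (x_f - x_ball),
    # the fielder's implied position at time t is:
    def fielder(t):
        return vx * t + (vz * t - 0.5 * g * t**2) / (k * t)

    return fielder(T), landing

pos, landing = chapman_catch()
# Following the simple perceptual rule, the fielder is standing at the
# landing point at the moment the ball comes down.
```

The point of the sketch is the one the psychologists make: the “calculation” is done by the whole moving system, not by an internal physics engine.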

(This, by the way, is what these psychologists regard as ‘true’ embodied cognition — cognition that is the product of a system that includes the body and the environment as well as the brain.)

This idea suggests two important concepts that are relevant to those wishing to improve their memory:

We (like all animals) have been shaped by evolution to follow the doctrine of least effort. Mental processing doesn’t come cheap! If we can offload some of the work to other parts of the system, then it’s sensible to do so.

In other words, there’s no great moral virtue in insisting on doing everything mentally. Back in the day (2,500 odd years ago), it was said that writing things down would cause people to lose their ability to remember (in Plato’s Phaedrus, Socrates has the Egyptian god-pharaoh say to Thoth, the god who invented writing, “this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.”)

This idea has lingered. Many people believe that writing reminders to oneself, or using technology to remember for us, ‘rots our brains’ and makes us incapable of remembering for ourselves.

But here’s the thing: the world is full of information. And it is of varying quality and importance. You might feel that someone should be remembering certain information ‘for themselves’, but this is a value judgment, not (as you might believe) a helpful warning that their brain is in danger of atrophying itself into terminal dysfunction. The fact is, we all choose what to remember and what to forget — we just might not have made a deliberate and conscious choice. Improving your memory begins with this: actually thinking about what you want to remember, and practicing the strategies that will help you do just that.

However, there’s an exception to the doctrine of least effort, and it’s evident among all the animals with sufficient cognitive power — fun. All of us who have enough brain power to spare, engage in play. Play, we are told, has a serious purpose. Young animals play to learn about the world and their own capabilities. It’s a form, you might say, of trial-&-error — but a form with enjoyability built into the system. This enjoyability is vital, because it motivates the organism to persist. And persistence is how we discover what works, and how we get the practice to do it well.

What distinguishes a good outfielder from someone who’s never tried to catch a ball before? Practice. To judge the timing, to get the movement just right — movement which will vary with every ball — you need a lot of practice. You can’t just read about what to do. And that’s true of every physical skill. Less obviously, it’s true of cognitive skills also.

It also ties back to what I was saying about trying to achieve flow. If you’re not enjoying what you’re doing, it’s probably either too easy or too hard for you. If it’s too easy, try and introduce some challenge into it. If it’s too hard, break it down into simpler components and practice them until you have achieved a higher level of competence on them.

Enjoyability is vital for learning well. So don’t knock fun. Don’t think play is morally inferior. Instead, try and incorporate a playful element into your work and study (there’s a balance, obviously!). If you have hobbies you enjoy, think about elements you can carry across to other activities (if you don’t have a hobby you enjoy, perhaps you should start by finding one!).

So the message for today is: the holy grail in memory and learning is NOT to remember everything; the superior approach to work / study / life is NOT total mastery and serious dedication. An effective memory is one that remembers what you want/need it to remember. Learning occurs through failure. Enjoyability greases the path to the best learning and the most effective activity.

Let focused fun be your mantra.

Daydreaming nurtures creativity?

Back in 2010, I read a charming article in the New York Times about a bunch of neuroscientists bravely disentangling themselves from their technology (email, cellphones, laptops, …) and going into the wilderness (rafting down the San Juan River) in order to get a better understanding of how heavy use of digital technology might change the way we think, and whether we can reverse the problem by immersing ourselves in nature.

One of those psychologists has now co-authored a study involving 56 people who participated in four- to six-day, electronic-device-free wilderness hiking trips organized by Outward Bound schools. The study looked at the effect of this experience on creativity, comparing the performance of 24 participants who took the 10-item creativity test the morning before they began the trip, and 32 who took the test on the morning of the trip's fourth day.

Those few days in the wilderness increased performance on the task by 50% — from an average of 4.14 pre-trip to 6.08.

However, much as I like the idea, I have to say my faith in these results is not particularly great, given that there was a significant age difference between the two groups. The average age of the pre-hike group was 34, and that of the in-hike group 24. Why the researchers didn’t try to control this I have no idea, but I’m not convinced by their statement that they statistically accounted for age effects — which are significant.

Moreover, this study doesn’t tell us whether the effect was due to the experience of nature, simply the experience of doing something different, or the unplugging from technology. Still, it adds to the growing research exploring Attention Restoration Theory.

View from my window

I’m a great fan of nature myself, and count myself very fortunate to live surrounded by trees and within five minutes of a stream and bush (what people in other countries might call ‘woods’, though New Zealand bush is rather different). However, whether or not it is a factor in itself, there’s no denying other factors are also important — not least, perhaps, the opportunity to let your mind wander. “Mind wandering”, it has been suggested, evokes a unique mental state that allows otherwise opposing networks to work in cooperation, and stimulates problem-solving.

This is supported, perhaps, in another recent study. Again, I’m not putting too much weight on this, because it was a small study and most particularly because it was presented at a conference and very few details are available. But it’s an interesting idea, so let me give you the bullet points.

In the first study, 40 people were asked to copy numbers out of a telephone directory for 15 minutes, before being asked to complete a more creative task (coming up with different uses for a pair of polystyrene cups). Those who had first copied out the telephone numbers (the most boring task the researchers could think of) were more creative than a control group of 40 who had simply been asked to come up with uses for the cups, with no preamble.

In a follow-up experiment, an extra experimental group was added — these people simply read the phone numbers. While, once again, those copying the numbers were more creative than the controls, those simply reading the numbers scored the most highly on the creativity test.

The researchers suggest that boring activities that allow the most scope for daydreaming can lead to the most creativity. (You can read more about this study in the press release and in a Huffington Post article by one of the researchers.)

Remembering other research suggesting that thinking about your experiences when living abroad can make you more creative, I would agree, in part, with this conclusion: I think doing a boring task can help creativity, if you are not simply bogged down in the feeling of boredom, if you use the time granted you to think about something else — but it does matter what you think about!

The wilderness experiment has two parts to it: like the boring task, but to a much greater degree (longer span of time), it provides an opportunity to let your mind run free; like the living-abroad experiment, it puts you in a situation where you are doing something completely different in a different place. I think both these things are very important — but the doing-something-different is more important than putting yourself in a boring situation! Boredom can easily stultify the brain. The significance of the boredom study is not that you should do boring tasks to become more creative, but that, if you are doing something boring (that doesn’t require much of your attention), you should let your thoughts wander into happy and stimulating areas, not just wallow in the tedium!

But of course the most important point of these studies is a reminder that creativity - the ability to think divergently - is not simply something a person 'has', but that it flowers or dwindles in different circumstances. If you want to encourage your ability to think laterally, to solve problems, to be creative, then you need to nurture that ability.

Seeing without words

I was listening on my walk today to an interview with Edward Tufte, the celebrated guru of data visualization. He said something I took particular note of, concerning the benefits of concentrating on what you’re seeing, without any other distractions, external or internal. He spoke of his experience of being out walking one day with a friend, in a natural environment, and what it was like to just sit down for some minutes, not talking, in a very quiet place, just looking at the scene. (Ironically, I was also walking in a natural environment, amidst bush, beside a stream - but I was busily occupied listening to this podcast!)

Tufte talked of how we so often let words get between us and what we see. He spoke of a friend who was diagnosed with Alzheimer’s, and how whenever he saw her after that, he couldn’t help but be watchful for symptoms, couldn’t help interpreting everything she said and did through that perspective.

There are two important lessons here. The first is a reminder of how most of us are always rushing to absorb as much information as we can, as quickly as we can. There is, of course, an ocean of information out there in the world, and if we want to ‘keep up’ (a vain hope, I fear!), we do need to optimize our information processing. But we don’t have to do that all the time, and we need to be aware that there are downsides to that attitude.

There is, perhaps, an echo here with Kahneman’s fast & slow thinking, and another to the idea that quiet moments of reflection during the day can bring cognitive benefits.

In similar vein, then, we’d probably all find a surprising amount of benefit from sometimes taking the time to see something familiar as if it was new — to sit and stare at it, free from preconceptions about what it’s supposed to be or supposed to tell us. A difficult task at times, but if you try and empty your mind of words, and just see, you may achieve it.

The second lesson is more specific, and applies to all of us, but perhaps especially to teachers and caregivers. Sometimes you need to be analytical when observing a person, but if you are interacting with someone who has a label (‘learning-disabled’, ‘autistic’, ‘Alzheimer’s’, etc), you will both benefit if you can sometimes see them without thinking of that label. Perhaps, without the preconception of that label, you will see something unexpected.

Have we really forgotten how to remember?

A new book, Moonwalking with Einstein: The Art and Science of Remembering Everything, has been creating some buzz recently. The book (I haven’t read it) is apparently about a journalist’s year of memory training that culminated in him making the finals of the U.S.A. Memory Championships. Clearly this sort of achievement resonates with a lot of people — presumably because of the widespread perception that forgetfulness is a modern-day plague, for which we must find a cure.

Let’s look at some of the points raised in the book and the discussion of it. There’s the issue of disuse. It’s often argued that technology, in the form of mobile phones and computers, means we no longer need to remember phone numbers or addresses. That calculators mean we don’t need to remember multiplication tables. That books mean we don’t need to remember long poems or stories (this one harks back to ancient times — the oft-quoted warning that writing would mean the death of memory).

Some say that we have forgotten how to remember.

The book recounts the well-known mnemonic strategies habitually used by those who participate in memory championships. These strategies, too, date back to ancient times. And you know something? Back then, just like now, only a few people ever bothered with these strategies. Why? Because for the most part, they’re far more trouble than they’re worth.

Now, this is not to say that mnemonic strategies are of no value. They are undoubtedly effective. But to achieve the sort of results that memory champions aspire to requires many, many hours of effort. Moreover, and more importantly, these hours do not improve any other memory skills. That is, if you spend months practicing to remember playing cards, that’s not going to make you better at remembering the name of the person you met yesterday, or remembering that you promised to pick up the bread, or remembering what you heard in conversation last week. It’s not, in fact, going to help you with any everyday memory problem.

It may have helped you learn how to concentrate — but there are far more enjoyable ways to do that! (For example, both Lumosity and Posit Science offer games that are designed to help you improve your ability to concentrate. Both programs are based on cognitive science, and are run by cognitive scientists. Both advertise on my website.)

Does it matter that we can’t remember phone numbers? It’s argued that being unable to remember the phone numbers of even your nearest and dearest, if your phone has a melt-down, is a problem — although I don’t think anyone’s arguing that it’s a big problem. But if you are fretting about not being able to remember the numbers of those most important to you, the answer is simple, and doesn’t require a huge amount of training. Just make sure you make the effort to recall the number each time before you use it. After a while it’ll come automatically, and effortlessly, to mind (assuming that these are numbers you use often). If there’s a number you don’t use often, but don’t want to write down or record digitally, then, yes, a mnemonic is a good way to go. But again, you don’t have to get wildly complicated about it. The sort of complex mnemonics that memory champs use are the sort required for very fast encoding of many numbers, words, or phrases. For the occasional number, a few simple tricks suffice.

Shopping lists are another oft-quoted example. Sure, we’ve all forgotten to buy something from the supermarket, but it’s a long way from that problem to the ‘solution’ of complicated mnemonic images and stories. Personally, I find that if I write down what I want from the shop, then that’s all I need to do. Having the list with you is a reassurance, but it’s the act of writing it down that’s the main benefit. But if someone else in the household adds items, then that requires special effort. Similarly, if the items aren’t ‘regular’ ones, then that requires a bit more effort.

I have an atavistic attachment to multiplication tables, but is it really important for anyone to memorize them anymore? A more important skill is that of estimation — where so many people seem to fall down is in not realizing, when they perform a calculation inaccurately, that the answer is unlikely and they’ve probably made an error. More time getting a ‘feel’ for number size would be time better spent.

Does it matter if we can’t remember long poems? Well, I do favor such memorization, but not because failing to remember such things demonstrates “we don’t know how to remember anymore”. I think that memorizing poems or speeches that move us ‘furnishes the mind’, and plays a role in identity and belongingness. But you don’t need, and arguably shouldn’t use, complex mnemonic strategies to memorize them. If you want to ‘have’ them — and it has been argued that it is only by memorizing a text that you can make it truly yours — then you are better spending time with it in a meaningful way. You read it, you re-read it, you think about it, you recite the words aloud because you enjoy the sound of the words, you repeat them to friends because you want to share them, you dwell on them. You have an emotional attachment, and you repeat the words often. And so, they become yours, and you have them ‘in your heart’.

Memorizing a poem you hate because the teacher insists is a different matter entirely! And though you can make the case that children have to be forced to memorize such verse until they realize it’s something they like, I don’t think that’s true. Children ‘naturally’ memorize verse and stories that they like; it’s forced memorization that has engendered any dislike they feel.

Anyway, that’s an argument for another day. Let’s return to the main issue: have we forgotten how to remember?

No.

We remember naturally. We forget naturally too. Both of these are processes that happen to us regardless of our education, of our intelligence, of our tendencies to out-source part of our memory. We have the same instinctive understanding of how to remember that we have always had, and the ability to remember long speeches or sagas is, as it has always been, restricted to those few who want the ability (bards, druids, Roman politicians).

It’s undeniably true that we forget more than our forebears did — but we remember more too. The world’s a different place, and one that puts far greater demands on memory than it ever did. But the answer’s not to pine after a ‘photographic memory’, or the ability to recite the order of a deck of playing cards after seeing them once. For almost all of us, that ability is too hard to come by, and won’t help us with any of the problems we have anyway.

The author of this memoir is reported as saying that the experience taught him “to pay attention to the world around” him, to appreciate the benefits of having a mental repository of facts and texts, to appreciate the role of memory in shaping our experience and identity. These are all worthwhile goals, but you can rest assured that there are better, more enjoyable, ways of achieving them. There are also better ways of improving everyday memory. And perhaps most importantly, better ways of achieving knowledge and expertise in a subject. Mnemonics are an effective strategy for memorizing meaningless and arbitrary information, and they have their place in learning, but they are not the best method for learning meaningful information.

Let me add that by no means am I attacking Joshua Foer’s book, memory championships, or those who participate in them. I’m sure the book is an entertaining and enlightening read; memory championships are fully as worthwhile as any sport championship; those who participate in them have a great hobby. I have merely used this event as a springboard for offering some of my thoughts on the subject.

Here are the links that provoked this post. Two reviews of Joshua Foer’s book:
http://www.theguardian.com/science/2011/mar/13/memory-techniques-joshua…
http://www.nytimes.com/2011/03/08/books/08book.html

An account and a video of a high school team’s winning of the US memory championship (high school division)
http://video.nytimes.com/video/2011/03/09/sports/100000000710149/memory…
http://www.nytimes.com/2011/03/10/sports/10memory.html

Addendum:

After writing this, I discovered another article, this time by Foer himself. He makes a couple of points I’ve made before, but are well worth repeating. Until a few hundred years ago, there were very few copies of any text, and therefore it behooved any scholar, in reading a book, to remember it as well as he could. (In passing, I’d like to note that Foer wins major points with me by quoting Mary Carruthers). Therefore, the whole way readers approached books was very different to how it is for us today, when we value range more than depth. Understandably, when there are so many texts, on so many topics. To constrict ourselves to a few books that we read over and over again is not something we should wish on ourselves. But the price of this is clear; we can all relate to Foer’s comment: “There are books up there [on my bookshelves] that I can’t even remember whether I’ve read or not.”

I was also impressed to learn that he’d taken advice from that expert on expertise, K. Anders Ericsson. And the article has a very good discussion on how to practice, and Ericsson’s work on what he calls deliberate practice (although Foer doesn’t use that name).

Finally, just to reiterate the main point of my post, Foer himself says at the end of this excellent article: “True, what I hoped for before I started hadn’t come to pass: these techniques didn’t improve my underlying memory … Even once I was able to squirrel away more than 30 digits a minute in memory palaces, I seldom memorized the phone numbers of people I actually wanted to call. It was easier to punch them into my cellphone.”

Note that you can also test your memorization abilities with games from the World Memory Championship at http://www.nytimes.com/interactive/2011/02/20/magazine/memory-games.htm


Memory is complicated

Recently a “Framework for Success in Postsecondary Writing” came out in the U.S. This framework talked about the importance of inculcating certain “habits of mind” in students. One of these eight habits was metacognition, which they defined as the ability to reflect on one’s own thinking as well as on the individual and cultural processes used to structure knowledge.

The importance of metamemory was emphasized in two recent news items I posted, both dealing with encoding fluency, and the way in which many of us use it to judge how well we’ve learned something, or how likely we are to remember something. The basic point is that we commonly use a fluency heuristic (“it was easy to read/process, therefore it will be easily remembered”) to guide our learning, and yet that is often completely irrelevant.

BUT, not always irrelevant.

In the study discussed in Fluency heuristic is not everyone’s rule, people who believed intelligence is malleable did not use the fluency heuristic. In one situation this was absolutely the right thing to do; in the other, not so much, because there, what made the information easy to process did in fact also make it easier to remember.

The point is not that the fluency heuristic is wrong. Nor that it is right. The point is that heuristics (“rules of thumb”) are general guidelines, useful as quick and dirty ways of dealing with things you lack the expertise to deal with better. Heuristics are useful, but they are most useful when you have the knowledge to know when to apply them. The problem is not the use of this heuristic; it is the inflexible use of this heuristic.

Way back, more than ten years ago, I wrote a book called The Memory Key, and in it I said: “The more you understand about how memory works, the more likely you are to benefit from instruction in particular memory skills.” That’s what my books are all about, and that’s what this website is all about.

Learning a “rule” is easy; learning to tell when it’s appropriate to apply it is quite another matter. My approach to teaching memory strategies is far more complex than the usual descriptions, because learning how to perform a strategy is not particularly helpful on its own. But the reason most memory-improvement books/courses don’t try to do what I do is that explaining how it all works — how memory works, how the strategy works, how it all fits together — is a big task.

But the fact is, learning is a complicated matter. Oh, humans are, truly, great learners. We really do have an amazing memory, when you consider all the things we manage to stuff in there, usually without any great effort or particular intention. But that’s the point, isn’t it? It isn’t about how much you remember. It’s about remembering the things we want to remember.

And to do that, we need to know what makes things hard to remember, or easy to remember. We need to know that this is a question about the things themselves, about the context they’re in, about the way you’re experiencing them, and about the way you relate to them. You can see why this is something that can’t simply be written down in a series of bullet points.

But you don’t have to become a cognitive psychologist either! Expertise comes at different levels. My aim, in my books in particular, and on this website, is to explain as much as is helpful, leaving out most of the minutiae of neuroscience and cognitive theory, trying to find the kernel that is useful at a practical level.

It’s past time I put all these bits together, to describe, for example, exactly when a good mood helps cognition, and when it impairs it; when shifting your focus of attention impairs your performance, and when you need to shift focus to revive your performance; when talking helps, and when it doesn’t; when gesturing helps, and when it doesn’t — you see, there are no hard-and-fast rules about anything. Everything is tempered by task, by circumstance, by individual. So, I will be working on that: the manual for advanced users, you might call it. Let me know if this is something you’d be interested in (the more interest, the more time I’ll spend on it!).

Why asking the right questions is so important, and how to do it

Research; study; learning; solving problems; making decisions — all these, to be done effectively and efficiently, depend on asking the right questions. Much of the time, however, people let others frame the questions, not realizing how much this shapes how they think.

This applies particularly to public debate and communication, even to something that may appear as ‘factual’ as an infographic presenting data. The data that are presented, and the way they are presented, govern the conclusions you take away, and they depend on the question the designer thought she was supposed to answer, not on the questions you might be interested in. Yet so much of the time, our thoughts are shaped by the presentation, and we come away having lost sight of our own questions.

In research and study, decision-making and problem-solving, the difficulty can be even more insidious, because we ourselves may think we came up with the questions. But asking the right question is crucial, and it should be no surprise that getting it right on the first attempt is not something to be assumed! Moreover, what might be the right question at the beginning of your task may not still be the right question once you’ve acquired more understanding.

In other words, framing questions is not only a first crucial step — it’s also something you need to revisit, repeatedly.

So how do you know if your questions are the most effective ones for your task? How do you test them?

To assess the effectiveness of your questions, you need to be consciously aware of the hierarchy to which they belong. Every question is, explicitly or implicitly, part of a nested set of questions and assumptions. Your task is to make that nesting explicit.

Here are two examples: an everyday decision-making task, and a learning task.

Because it’s that time of year, let’s look at the common question “Should I go on a diet?” This might be nested in these beliefs (do note I’m simplifying this decision considerably):

  • I’m overweight
  • It’s dangerous to be overweight / Fat is ugly / Other people hate overweight people / I’ll never get that promotion/a job unless I lose weight / I’ll never get a date unless I lose weight …

We’ll ignore the first assumption (“I’m overweight”), because that should be a matter of measurement (although of course it’s not that simple). (I’m also ignoring the issue of whether going on a diet is a good way of losing weight — this is a cognitive exercise, not an advice column!) Let’s instead look at the second set of beliefs. If your question is predicated on the belief that “I’ll never get that promotion/a job unless I lose weight”, then you can see that your question would be better phrased as “Will losing weight improve my chances of getting a job/being promoted?”. This in turn spins off other questions, such as: “How much weight would I need to lose to improve my chances?”; “Is losing weight a better strategy than other strategies that might improve my chances?”; “What other things could I do to improve my chances?”

On the other hand, if your question comes out of a belief that “It’s dangerous to be overweight”, then the question would be better phrased as “Is the amount of excess weight I carry medically dangerous?” — a question that leads to a search of the medical literature, and might end up transforming into: “What are the chances I’ll develop diabetes?”; “What is the most effective thing I can do to reduce my chance of developing diabetes?”

If, however, your question is based on a belief that “Other people hate overweight people”, then you might want to think about why you believe that — is it about societal attitudes that you read about in the media? Is it about the way you think people are looking at you in public? Is it about comments from specific individuals in your life? This can end up quite a deep nesting, leading right down to your beliefs about your self-worth and your relationship with the people in your life.
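If it helps to see the structure, the nesting above can be made explicit as a simple question tree. This is just an illustrative sketch (the beliefs and rephrasings are condensed from the discussion above; the structure itself is hypothetical, not a prescribed method):

```python
# A hypothetical sketch of making a question's nesting explicit: map each
# underlying belief to the sharper question it suggests. The entries are
# condensed from the diet example above, purely for illustration.

question_tree = {
    "Should I go on a diet?": {
        "I'll never get that promotion unless I lose weight":
            "Will losing weight improve my chances of being promoted?",
        "It's dangerous to be overweight":
            "Is the amount of excess weight I carry medically dangerous?",
        "Other people hate overweight people":
            "Why do I believe that, and on what evidence?",
    }
}

# Walk the tree to see each assumption alongside its better-phrased question.
for belief, better_question in question_tree["Should I go on a diet?"].items():
    print(f"{belief!r}  ->  {better_question!r}")
```

Writing the beliefs down like this is the point: once each assumption is visible, each reframed question can spin off its own sub-questions, extending the nesting another level.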

Let’s look at a learning task: you’ve been asked to write an essay on the causes of the Second World War. This might appear to be a quite straightforward question — but like most apparently straightforward questions, it is an illusion generated by lack of knowledge. The more you know about a subject, the fewer straightforward questions there are!

Any question about causes should make you think of the distinction between proximate causes and deeper causes. The proximate cause of WW2 from the European point of view might be Hitler’s invasion of the Sudetenland; for Americans, it might be the Japanese bombing of Pearl Harbor — but these are obviously not the sole causes of the War. There is obviously a long chain of events leading up to the invasion of the Sudetenland, and most will date this chain back to the Versailles Treaty, which imposed such harsh penalties on Germany after it lost the First World War. But that, of course, takes us back even further, to the causes of WW1, and so on. Ultimately, you might want to argue that the way civilization rose and developed in ancient Mesopotamia led to the use of war as the principal means of establishing state dominance and power. You might even want to go back further, to primate evolution.

The distinction between proximate and ultimate causes, while useful, is of course a fuzzy one. These are not dichotomous concepts, but ones on a continuum.

All this is a long way of saying that any discussion of causes is always going to be a selected subset of possible causes. It is your (or your teacher’s) decision what subset you choose.

So, given that massive tomes have been written about the causes of WW2, how do you go about writing your comparatively brief essay?

Clearly it depends on the larger goal (we’re back to our nested hierarchy now). Here we must distinguish between two points of view: the instructor’s, and your own.

For example, the instructor might want you to write the essay to show:

  • your grasp of a few essential points covered in class or selected texts
  • your understanding of the complexity of the question
  • your understanding of the nature of historical argument
  • your ability to research a topic
  • your ability to write an essay in a particular format

The tack you take, therefore (if you want good grades!), will depend on what the instructor’s real goal is. It is likely, of course, that the instructor will have more than one goal, but let’s keep it simple, and assume only one.

But the instructor’s purposes aren’t the whole story. Your own goals are important too. As far as you’re concerned, you might be writing the essay:

  • Because the teacher asked for it (and no more)
  • Because you’re interested in the topic
  • Because you want to do well in the class.

Each of these, and the latter two in particular, are only part of the story. Why are you interested in the topic? Because you’re interested in history in general? Because you’re interested in war? Because a family member was caught up in the events of WW2? Perhaps your interest is in Japan and how it came to that point, or perhaps your interest is in how a society can come to believe that their best interests are served by invading another country.

And these are only some of the possible ways you might be interested. Obviously, there are many many aspects of this very broad question (“What are the causes of WW2?”) that could be discussed. So you need to consider both the instructor’s goals and your own when you re-frame the question in your own words.

Let’s assume that your instructor is interested in your understanding of the complexity of the topic, and you yourself are keen to get good grades although you have no personal interest to shape your approach. How would you frame your initial question?

The simplest question, for the simplest situation, is: What were the causes of WW2 covered in the text?

But if your instructor wants you to reveal your understanding of the complexity of the topic, you’ll probably want to come up with a number of specific questions that can each form the basis for a different paragraph in your essay.

For example:

  • What were the proximate causes of Britain declaring war on Germany?
  • What was the immediate chain of events leading to Germany’s invasion of Poland?
  • What role did the Versailles Treaty play in providing the conditions leading to Germany’s invasion of Poland?
  • What was the immediate chain of events leading to Japan’s invasion of Manchuria?
  • What did the League of Nations do when Japan invaded Manchuria, and how did this affect Germany’s re-occupation of the Rhineland and later invasion of Poland?

Depending on your knowledge of the topic at the beginning, many of those questions may only be revealed once you have answered an earlier question.

If you do, on the other hand, have an interest in a specific aspect of the multiple causes of WW2, you can still satisfy both your teacher’s goals and your own by briefly describing the ‘big picture’ — covering these same questions, but very briefly — and then pulling out one set of questions to answer in more detail, as a demonstration of the complexity of the issue.

Okay, these are bare bones examples (and have still gone on long enough, demonstrating how long it takes when you try to spell out any process!), but hopefully it's enough to show how understanding the questions and assumptions behind the ostensible question helps you frame the right question (and note that questions and assumptions are often just the same thing, framed differently). You can read more about asking questions as a study strategy in my older articles: Asking better questions and Metacognitive questioning and the use of worked examples. I also have a much longer example in my book Effective notetaking, which goes into considerable detail on this subject.

This post has gone on long enough, but let me end by making two last points, to emphasize the importance of asking the right questions. First, the question that starts you off not only shapes your search (for the answer to the problem, or for the right information, or the right decision), it also primes you. Priming is a psychological term that refers to the increased accessibility of related information when a particular item has been retrieved. For example, if you read ‘bread’, you are primed for ‘butter’; if you’ve just remarked on a pastel pink car, you’re more likely to notice other pastel-colored cars.

Second, questions are also an example of another important concept in memory research — the retrieval cue. As I discuss at some length in Perfect Memory Training, your ability to retrieve a memory (‘remember’) depends a lot on the retrieval cue. Retrieval cues (whatever prompts your memory search) are effective to the extent that they set you on the right path to the target memory. For example, the crossword clue “Highest university degree (9 letters)” immediately brought to my mind the answer “doctorate”; I didn’t need any letter clues. On the other hand, the clue “Large marine predator (9 letters)” left me stumped until I generated the right initial letter.

As I say in Perfect Memory Training, when you’re searching for specific information, it’s a good idea to actively generate recall cues (generation strategy), rather than simply rely on a passive association strategy (this makes me think of that, that makes me think of that). Asking questions, and repeatedly revising those questions, is clearly a type of generation strategy, and in some situations it might be helpful to think of it as such.

As in every aspect of improving memory and learning skills, it helps to know exactly what you're doing and why it works! This is a large topic, but I hope this has helped you understand a little more about the value of asking questions, and how to do it in a way that is most effective.

Why it’s important to work out the specific skills you want to improve

I have spoken before, here on the website and in my books, about the importance of setting specific goals and articulating your specific needs. Improving your memory is not a single task, because memory is not a single thing. And as I have discussed when talking about the benefits of ‘brain games’ and ‘brain training’, which are so popular now, there is only a little evidence that we can achieve general across-the-board improvement in our cognitive abilities. But we can improve specific skills, and we may be able to improve a range of skills by working on a more fundamental skill they all share.

The modularity of the brain is emphasized in a recent study that found the two factors now thought to be operating in working memory capacity are completely independent of each other. Working memory capacity has long been known to be strongly correlated with intelligence, but the recent discovery that people vary not only in the number of items they can hold in short-term memory but also in how clear and precise the items are, has changed our conception of working memory capacity.

Both are measures of information; the clarity (resolution) of the items in working memory essentially reflects how much information about each item the individual can hold. So should our measures of WMC somehow encapsulate both factors? Are they related? It would seem plausible that those who can hold more items might hold less information about each of them; that those who can only hold two or three items might hold far more information on each item.

But this new study finds no evidence for that. Apparently the two factors are completely independent. Moreover, the connection between WMC and intelligence seems only to apply to the number of items, not to their clarity.

Working memory is fundamental to our cognitive abilities — to memory, to comprehension, to learning, to reasoning. And yet even this very basic process (basic in the sense of ‘at the base of everything’, not in the sense of primitive!) is now seen to break down further, into two quite separate abilities. And while clarity may have nothing to do with intelligence, it assuredly has something to do with abilities such as visual imagery, search, discrimination.

It may be that clarity is more important to you than number of items. It depends on what skills are important to you. And the skills that are important to you change as your life circumstances change. When you’re young, you want as broad a base of skills as possible, but as you age, you do better to become more selective.

Many people die with brains that show all the characteristics of Alzheimer’s, and yet they showed no signs of that in life. The reason is that they had sufficient ‘cognitive reserve’ — a brain sufficiently well and strongly connected — that they could afford (for long enough) the losses the disease created in their brain. This doesn’t mean they wouldn’t have eventually succumbed to the inevitable, of course, if they had lived longer. But a long enough delay can essentially mean the disease has been prevented.

One of the best ways to fight cognitive decline and dementia is to build your brain up in the skills and domains that are, and will be, important to you. And while this can, and should, involve practicing and learning better strategies for specific skills, it is also a good idea to work on more fundamental skills. Knowing which fundamental skills underlie the specific skills you’re interested in would enable you to direct your attention appropriately.

Thus it may be that while increasing the number of items you can hold in short-term memory might help you solve mathematical problems, remember phone numbers, or understand complex prose, trying to improve your ability to visualize objects clearly might help you remember people’s faces, or where you left your car, or use mnemonic strategies.

Variety is the key to learning

On a number of occasions I have reported on studies showing that people with expertise in a specific area show larger gray matter volume in relevant areas of the brain. Thus London taxi drivers (who are required to master “The Knowledge” — all the ways and byways of London) have been found to have an increased volume of gray matter in the anterior hippocampus (involved in spatial navigation). Musicians have greater gray matter volume in Broca’s area.

Other research has found that gray matter increases in specific areas can develop surprisingly quickly. For example, when 19 adults learned to match made-up names against four similar shades of green and blue in five 20-minute sessions over three days, the areas of the brain involved in color vision and perception increased significantly.

This is unusually fast, mind you. Previous research has pointed to the need for training to extend over several weeks. The speed with which these changes were achieved may be because of the type of learning — that of new categories — or because of the training method used. In the first two sessions, participants heard each new word as they regarded the relevant color; had to give the name on seeing the color; had to respond appropriately when a color and name were presented together. In the next three sessions, they continued with the naming and matching tasks. In both cases, immediate feedback was always given.

But how quickly brain regions may re-organize themselves to optimize learning of a specific skill is not the point I want to make here. Some new research suggests our ideas of cortical plasticity need to be tweaked.

In my book on note-taking, I commented on how emphasis of some details (for example by highlighting) improves memory for those details but reduces memory of other details. In the same way, growth in one small region of the brain comes at the expense of others. If we have to grow an area for each new skill, how do we keep up our old skills, whose areas might be shrinking to make up for it?

A rat study suggests the answer. While substantial expertise (such as our London cab-drivers and our professional musicians) is apparently underpinned by permanent regional increase, the mere learning of a new skill does not, it seems, require the increase to endure. When rats were trained on an auditory discrimination task, relevant sub-areas of the auditory cortex grew in response to the new discrimination. However, after 35 days the changes had disappeared — but the rats retained their new perceptual abilities.

What’s particularly interesting about this is what the finding tells us about the process of learning. It appears that the expansion of bits of the cortex is not the point of the process; rather it is a means of generating a large and varied set of neurons that are responsive to newly relevant stimuli, from which the most effective circuit can be selected.

It’s a culling process.

This is the same as what happens with children. When they’re young, neurons grow with dizzying profligacy. As they get older, these are pruned. Gone are the neurons that would allow them to speak French with a perfect accent (assuming French isn’t a language in their environment); gone are the neurons that would allow them to finely discriminate the faces of races other than those around them. They’ve had their chance. The environment has been tested; the needs have been winnowed; the paths have been chosen.

In other words, the answer’s not: “more” (neurons/connections); the answer is “best” (neurons/connections). What’s most relevant; what’s needed; what’s the most efficient use of resources.

This process of throwing out lots of trials and seeing what wins, echoes other findings related to successful learning. We learn a skill best by varying our practice in many small ways. We learn best from our failures, not our successes — after all, a success is a stopper. If you succeed without sufficient failure, how will you properly understand why you succeeded? How will you know there aren’t better ways of succeeding? How will you cope with changes in the situation and task?

Mathematics is an area in which this process is perhaps particularly evident. As a student or teacher, you have almost certainly come across a problem that you (or the student) couldn’t understand when it was expressed in one way, and perhaps in several different ways. Until, at some point, for no clear reason, understanding ‘clicks’. And it’s not necessarily that this last way of expressing / representing it is the ‘right’ one — if it had been presented first, it may not have had that effect. The effect is cumulative — the result of trying several different paths and picking something useful from each of them.

In a recent news item I reported on a finding that people who learned new sequences more quickly in later sessions were those whose brains had displayed more 'flexibility' in the earlier sessions — that is, different areas of the brain linked with different regions at different times. And most recently, I reported on a finding that training on a task that challenged working memory increased fluid intelligence in those who improved at the working memory task. But not everyone did. Those who improved were those who found the task challenging but not overwhelming.

Is it too much of a leap to surmise that this response goes hand in hand with flexible processing, with strategizing? Is this what the ‘sweet spot’ in learning really reflects — a level of challenge and enjoyability that stimulates many slightly different attempts? We say ‘Variety is the spice of life’. Perhaps we should add: ‘Variety is the key to learning’.

References

Kwok, V., Niu, Z., Kay, P., Zhou, K., Mo, L., Jin, Z., et al. (2011). Learning new color names produces rapid increase in gray matter in the intact adult human cortex. Proceedings of the National Academy of Sciences.

The most effective learning balances same and different context

I recently reported on a finding that memories are stronger when the pattern of brain activity is more closely matched on each repetition, a finding that might appear to challenge the long-standing belief that it’s better to learn in different contexts. Because these two theories are very important for effective learning and remembering, I want to talk more about this question of encoding variability, and how both theories can be true.

First of all, let’s quickly recap the relevant basic principles of learning and memory (I discuss these in much more detail in my books The Memory Key, now out-of-print but available from my store as a digital download, and its revised version Perfect Memory Training, available from Amazon and elsewhere):

network principle: memory consists of links between associated codes

domino principle: the activation of one code triggers connected codes

recency effect: a recently retrieved code will be more easily found

priming effect: a code will be more easily found if linked codes have just been retrieved

frequency (or repetition) effect: the more often a code has been retrieved, the easier it becomes to find

spacing effect: repetition is more effective if repetitions are separated from each other by other pieces of information, with increasing advantage at greater intervals

matching effect: a code will be more easily found the more the retrieval cue matches the code

context effect: a code will be more easily found if the encoding and retrieval contexts match
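To make the network and domino principles concrete, here is a toy spreading-activation sketch. The graph, link weights, and numbers are invented purely for illustration; this is not a model from the books, just one way of picturing how retrieving one code primes its neighbours:

```python
# Toy spreading-activation model illustrating the network, domino,
# recency, and priming principles. All codes, links, and weights here
# are invented for illustration only.

network = {
    "bread": {"butter": 0.8, "bakery": 0.5},
    "butter": {"knife": 0.4, "bread": 0.8},
    "bakery": {"croissant": 0.6},
    "knife": {},
    "croissant": {},
}

def retrieve(cue, activation=None):
    """Retrieving one code partially activates linked codes,
    making them easier to find next time (the priming effect)."""
    if activation is None:
        activation = {code: 0.0 for code in network}
    activation[cue] = 1.0  # recency: the just-retrieved code is maximally active
    for neighbour, weight in network[cue].items():
        activation[neighbour] += weight  # domino: activation spreads along links
    return activation

act = retrieve("bread")
# After retrieving "bread", "butter" is more accessible than "croissant".
assert act["butter"] > act["croissant"]
```

In this sketch, reading ‘bread’ leaves ‘butter’ partially activated, which is exactly the priming effect listed above: linked codes become easier to find once one of them has been retrieved.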

Memory is about two processes: encoding (the way you shape the memory when you put it in your database, which includes the connections you make with other memory codes already there) and retrieving (how easy it is to find in your database). So making a ‘good’ memory (one that is easily retrieved) is about forming a code that has easily activated connections.

The recency and priming effects remind us that it’s much easier to follow a memory trace (by which I mean the path to it as well as the code itself) that has been activated recently, but that’s not a durable strength. Making a memory trace enduringly stronger requires repetition (the frequency effect). This is about neurobiology: every time a set of neurons fires in a particular sequence, it becomes a little easier for them to fire in that way again.

Now the spacing effect (which is well-attested in the research) seems at odds with this most recent finding, but clearly the finding is experimental evidence of the matching and context effects. Context at the time of encoding affects the memory trace in two ways, one direct and one indirect. It may be encoded with the information, thus providing additional retrieval cues, and it may influence the meaning placed on the information, thus affecting the code itself.

It is therefore not at all surprising that the closer the contexts, the closer the match between what was encoded and what you’re looking for, the more likely you are to remember. The thing to remember is that the spacing effect does not say that spacing makes the memory trace itself stronger. In fact, most of the benefit of spacing occurs with as little as two intervening items between repetitions — probably because you’re not going to benefit from repeating a pattern of activation if you don’t give the neurons time to reset themselves.

But repeating the information at increasing intervals does produce better learning, measured by your ability to easily retrieve the information after a long period of time (see my article on …), and it does this (it is thought) not because the memory trace is stronger, but because the variations in context have given you more paths to the code.
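The expanding intervals described above can be sketched as a minimal review scheduler. Real spaced-retrieval programs use more sophisticated algorithms; the simple doubling rule and the numbers here are assumptions for illustration only:

```python
# A minimal sketch of expanding-interval review (the spacing effect).
# The doubling schedule is an illustrative assumption, not the algorithm
# any particular spaced-retrieval program actually uses.

def review_schedule(first_interval_days=1, reviews=5, factor=2):
    """Return the days on which an item should be reviewed,
    with each gap larger than the last."""
    day, interval, schedule = 0, first_interval_days, []
    for _ in range(reviews):
        day += interval
        schedule.append(day)
        interval *= factor  # expand the gap after each successful recall
    return schedule

print(review_schedule())  # [1, 3, 7, 15, 31]
```

The point of the widening gaps is the one made in the text: each later review happens in a context further removed from the original encoding, giving you more distinct paths to the same code.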

This is the important thing about retrieving: it’s not simply about having a strong path to the memory. It’s about getting to that memory any way you can.

Let’s put it this way. You’re at the edge of a jungle. From where you stand, you can see several paths into the dense undergrowth. Some of the paths are well-beaten down; others are not. Some paths are closer to you; others are not. So which path do you choose? The most heavily trodden? Or the closest?

If the closest is the most heavily trodden, then the choice is easy. But if it’s not, you have to weigh up the quality of the paths against their distance from you. You may or may not choose correctly.

I hope the analogy is clear. The strength of the memory trace is the width and smoothness of the path. The distance from you reflects the degree to which the retrieval context (where you are now) matches the encoding context (where you were when you first input the information). If they match exactly, the path will be right there at your feet, and you won’t even bother looking around at the other options. But the more time has passed since you encoded the information, the less chance there is that the contexts will match. However, if you have many different paths that lead to the same information, your chances of being close to one of them obviously increases.
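The trade-off in the analogy, path strength versus distance, can be written as a toy scoring rule. The multiplicative rule and the numbers are invented for illustration; actual memory retrieval is of course far more complex:

```python
# A toy version of the jungle-path analogy: each path to a memory has a
# strength (how well-trodden it is) and a context match (how close it is
# to where you stand now). The scoring rule and numbers are illustrative
# assumptions only.

def best_path(paths):
    """Pick the path with the best combination of strength and context match."""
    return max(paths, key=lambda p: p["strength"] * p["context_match"])

paths = [
    {"name": "well-trodden but distant", "strength": 0.9, "context_match": 0.2},
    {"name": "faint but right at your feet", "strength": 0.4, "context_match": 0.9},
]
print(best_path(paths)["name"])  # faint but right at your feet
```

Note how, under this toy rule, a weak trace in a closely matching context can beat a strong trace in a mismatched one, which is why encoding the same information in many contexts pays off.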

In other words, yes, the closer the match between encoding and retrieval context, the easier it will be to remember (retrieve) the information. And the more different contexts you have encoded with the information, the more likely it is that one of those contexts will match your current retrieval context.

A concrete example might help. I’ve been using a spaced retrieval program to learn the basic 2200-odd Chinese characters. It’s an excellent program, and groups similar-looking characters together to help you learn to distinguish them. I am very aware that every time a character is presented, it appears after another character, which may or may not be the same one it appeared after on an earlier occasion. The character that appeared before provides part of the context for the new character. How well I remember it depends in part on how often I have seen it in that same context.

I would ‘learn’ them more easily if they always appeared in the same order, in that the memory trace would be stronger, and I would more easily and reliably recall them on each occasion. However, in the long term, the experience would be disadvantageous, because as soon as I saw a character in a different context I would be much less likely to recall it. I can observe this process as I master these characters — with each different retrieval context, my perception of the character deepens as I focus attention on different aspects of it.