Desirable difficulty for effective learning

When we are presented with new information, we try to connect it to information we already hold. This is automatic. Sometimes the new information fits in easily; other times the fit is more difficult — perhaps because some of our old information is wrong, or perhaps because we lack some of the knowledge needed to fit the old and the new together.

When we're confronted by contradictory information, our first reaction is usually surprise. But if the surprise continues, with the contradictions perhaps increasing, or at any rate becoming no closer to being resolved, then our emotional reaction turns to confusion.

Confusion is very common in the learning process, despite most educators thinking that effective teaching is all about minimizing, if not eliminating, confusion.

But recent research has suggested that confusion is not necessarily a bad thing. Indeed, in some circumstances, it may be desirable.

I see this as an example of the broader notion of ‘desirable difficulty’, which is the subject of my current post. But let’s look first at this recent study on confusion for learning.

In the study, students engaged in ‘trialogues’ involving themselves and two animated agents. The trialogues discussed possible flaws in a scientific study, and the animated agents took the roles of a tutor and a student peer. To get the student thinking about what makes a good scientific study, the agents disagreed with each other on certain points, and the student had to decide who was right. On some occasions, the agents made incorrect or contradictory statements about the study.

In the first experiment, involving 64 students, there were four opportunities for contradictions during the discussion of each research study. Because the overall levels of student confusion were quite low, a second experiment, involving 76 students, used a delayed manipulation, where the animated agents initially agreed with each other but eventually started to express divergent views. In this condition, students were sometimes then given a text to read to help them resolve their confusion. It was thought that, given their confusion, students would read the text with particular attention, and so improve their learning.

In both experiments, students did significantly better on the test at the end on those trials where the contradiction between the two agents had genuinely confused them.

A side-note: self-reports of confusion were not very sensitive; students’ responses to forced-choice questions following the contradictions were a more sensitive index of confusion. This is a reminder that students are not necessarily good judges of their own confusion!

The idea behind all this is that, when there’s a mismatch between new information and prior knowledge, we have to explore the contradictions more deeply — make an effort to explain the contradictions. Such deeper processing should result in more durable and accessible memory codes.

Such a mismatch can occur in many, quite diverse contexts — not simply in the study situation. For example, unexpected feedback, anomalous events, obstacles to goals, or interruptions of familiar action sequences, all create some sort of mismatch between incoming information and prior knowledge.

However, not all instances of confusion are useful for learning and memory. The confusion needs to be relevant to the activity, and of course the individual needs to have the means to resolve it.

As I said, I see a relationship between this idea of the right level and type of confusion enhancing learning, and the idea of desirable difficulty. I’ve talked before about the ‘desirable difficulty’ effect (see, for example, Using 'hard to read' fonts may help you remember more). Both of these ideas, of course, connect to a much older and more fundamental idea: that of levels of processing. The idea that we can process information at varying levels, and that deeper levels of processing improve memory and learning, dates back to a paper written in 1972 by Craik and Lockhart (although it has been developed and modified over the years), and underpins (usually implicitly) much educational thinking.

But what interests me is not so much this fundamental notion (that deeper processing helps memory and learning, and that certain desirable difficulties encourage deeper processing) as the idea of getting the level right.

Too much confusion is usually counterproductive; so is too much difficulty.

Getting the difficulty level right is something I have talked about in connection with flow. On the face of it, confusion would seem to be counterproductive for achieving flow, and yet ... it rather depends on the level of confusion, don't you think? If the student has clear paths to follow to resolve the confusion, the information flow doesn't need to stop.

This idea also, perhaps, has connections to effective practice principles — specifically, what I call the ‘Just-in-time rule’. This is the principle that the optimal spacing for your retrieval practice depends on you retrieving the information just before you would have forgotten it. (That’s not as occult as it sounds! But I’m not here to discuss that today.)

It seems to me that another way of thinking about this is that you want to find that moment when retrieval of that information is at the ‘right’ level of difficulty — neither too easy, nor too hard.
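To make the ‘just-in-time’ idea concrete, here is a minimal sketch of such a review scheduler in Python. It is my illustration of the principle, not an algorithm from the research: the next_interval function, the multipliers, and the recalled/effortful signals are all assumptions, chosen only to show intervals expanding while retrieval stays at the ‘right’ difficulty.

    # Toy 'just-in-time' scheduler: expand the review interval while retrieval
    # stays effortful-but-successful, shrink it when retrieval fails.
    # All numbers are illustrative assumptions, not values from the research.

    def next_interval(current_days: float, recalled: bool, effortful: bool) -> float:
        """Return the next review interval in days."""
        if not recalled:
            return 1.0                 # forgotten: start the cycle again
        if effortful:
            return current_days * 1.3  # 'right' difficulty: grow gently
        return current_days * 2.5      # too easy: push the next review out

    interval = 1.0
    for recalled, effortful in [(True, False), (True, True), (False, False), (True, True)]:
        interval = next_interval(interval, recalled, effortful)
        print(f"review again in {interval:.1f} days")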

Successful teaching is about shaping the information flow so that the student experiences it — moment by moment — at the right level of difficulty. This is, of course, impossible in a factory-model classroom, but the mechanics of tailoring the information flow to the individual are now made possible by technology.

But technology isn't the answer on its own. To achieve optimal results, it helps if the individual student is aware that the success of their learning depends on managing the information flow (or will at least be more effective for it — some students will succeed regardless of the inadequacy of the instruction). This means they need to provide honest feedback; they need to be able to monitor their learning and recognize when they have ‘got’ something and when they haven’t; and they need to understand that if one approach to a subject isn’t working for them, they should try a different one.

Perhaps this provides a different perspective for some of you. I'd love to hear of any thoughts or experiences teachers and students have had that bear on these issues.

References

D’Mello, S., Lehman, B., Pekrun, R., & Graesser, A. (submitted). Confusion can be beneficial for learning. Learning and Instruction.

Social factors impact academic achievement

A brief round-up of a few of the latest findings reinforcing the fact that academic achievement is not all about academic ability or skills. Most of these relate to the importance of social factors.

Improving social belonging improves GPA, well-being & health, in African-American students

From Stanford, we have a reminder of the effects of stereotype threat, and an interesting intervention that ameliorated it. The study involved 92 freshmen, of whom 49 were African-American, and the rest white. Half the participants (none of whom were told the true purpose of the exercise) read surveys and essays written by upperclassmen of different ethnicities describing the difficulties they had fitting in during their first year at school. The other subjects read about experiences unrelated to a sense of belonging. The treatment subjects were then asked to write essays about why they thought the older college students' experiences changed, with illustrations from their own lives, and then to rewrite their essays into speeches that would be videotaped and could be shown to future students.

The idea of this intervention was to get the students to realize that everyone, regardless of race, has difficulty adjusting to college, and has times when they feel alienated or rejected.

While this exercise had no apparent effect on the white students, it had a significant impact on the grades and health of the black students. Grade point averages went up by almost a third of a grade between their sophomore and senior years, and 22% of them landed in the top 25% of their graduating class, compared to about 5% of black students who didn't participate in the exercise.

Moreover, the black students in the treatment group reported a greater sense of belonging compared to their peers in the control group; they were happier, less likely to spontaneously think about negative racial stereotypes, and apparently healthier (3 years after the intervention, 28% had visited a doctor recently, vs 60% in the control group).

Source: http://news.stanford.edu/news/2011/march/improve-minority-grades-031711…

Protecting against gender stereotype threat

Stereotype threat is a potential factor for gender as well as ethnicity.

I’ve reported on a number of studies showing that reminding women or girls of gender stereotypes about math results in poorer performance on subsequent math tests. A new study suggests that women could be “inoculated” against such effects if their math / science class is taught by a woman. In these experiments, although women’s academic performance didn’t suffer, their engagement and commitment to their STEM major were significantly affected.

In the first study, 72 women majoring in STEM subjects were given several tests measuring their implicit and explicit attitudes towards math vs English, plus a short but difficult math test. Half the students were (individually) tested by a female peer expert, supposedly double-majoring in math and psychology, and half by a male peer expert. Those tested by a male showed negative implicit attitudes towards math, while those tested by a female showed equal liking for math and English. Similarly, women implicitly identified more with math in the presence of the female expert. On the math test, women tested by the female expert attempted more problems (an average of 7.73 out of 10, compared to 6.39). There was no effect on performance — but given the difficulty of the test, there was a floor effect.

In the second study, 101 women majoring in engineering were given short biographies of 5 engineers, who were either male or female, or descriptions of engineering innovations (the control condition). Again, women presented with female engineers showed equal preference for math and English on the subsequent implicit attitudes test, while those presented with male engineers or innovations showed a significant implicit negative attitude to math. Implicit identification with math, however, wasn’t any stronger after reading about female engineers. Those who read about female engineers did report greater intentions to pursue an engineering career, and this was mediated by greater self-efficacy in engineering. Again, there was no effect on explicit attitudes toward math.

In the third study, the performance of 42 female and 49 male students in introductory calculus course sections taught by male (8 sections) or female (7 sections) instructors was compared. Professors were yoked to same-sex teaching assistants.

As with the earlier studies, female students implicitly liked math and English equally when the teacher was a woman, but had a decidedly more negative attitude toward math when their instructor was a man. Male students were unaffected by teacher gender. Similarly, female students showed greater implicit identification with math when their teacher was a woman; male students were unaffected. Female students also expected better grades when their teacher was a woman; male students didn’t differ as a function of teacher gender. (It should be noted that this wasn’t because they thought the women would be more generous markers; marking was pooled across all the instructors, and the students knew this.) There was no effect of teacher gender on final grade (but there was a main effect of student gender: women outperformed men).

In other words, the findings of the third study confirmed the effects on implicit attitudes towards STEM subjects, and demonstrated that male students were unaffected by the factors that affected female students.

Now we come to engagement. At the beginning of the semester, female students were much less likely than male students (9% vs 23%) to respond to questions put to the class, but later on, female students in sections led by women were much more likely to respond to such questions than were women in courses taught by men (46% vs 7%). Interestingly, more male students also responded to questions posed by female instructors (42% vs 26%). That would seem to suggest that male instructors are more likely to teach in ways that discourage many students from engaging in class, but women are undeniably more affected by this.

Additionally, at the beginning of the courses, around the same proportion of female students approached their instructors, regardless of instructor gender (12-13%). Later in the semester, the proportion of female students approaching female instructors stayed constant, but none of them approached male instructors. (Male students did not change: an average of 7% at both Time 1 and Time 2.) This could be taken to mean that male instructors consistently discouraged such approaches.

The number of students who asked questions in class did not vary over time, or by student gender. However it did vary by teacher gender: 22% of both male and female students asked questions in class when they were taught by women, while only 15% did so in courses taught by men.

Some of these effects, then, seem to indicate that male college instructors are more inclined to discourage student engagement. What the effects of that are remains to be seen.

Source: http://www.insidehighered.com/news/2011/03/03/study_suggests_role_of_ro…

Social and emotional learning programs found to boost student improvement

A review of 213 school programs designed to enhance students' social and emotional development has found that such programs not only significantly improved social and emotional skills, caring attitudes, and positive social behaviors, but also resulted in significant improvement on achievement tests (although only a small subset of the programs looked at this aspect, the numbers of students involved were very large).

The average improvement in grades and standardized-test scores was 11 percentile points — an improvement that falls within the range of effectiveness of academic interventions.

Source: http://www.physorg.com/news/2011-02-social-emotional-boost-students-ski…

http://www.edweek.org/ew/articles/2011/02/04/20sel.h30.html

Boys need close friendships

Related to this perhaps (I looked but couldn’t find any gender numbers for the SEL programs), from the Celebration of Teaching and Learning Conference in New York, developmental psychologist Niobe Way argues that one reason why boys are struggling in school is that they are experiencing a "crisis of connection." Stereotypical notions of masculinity, that emphasize separation and independence, challenge their need for close friendships. She's found that many boys have close friendships that are being discouraged by anxiety about being seen as gay or effeminate.

Way says that having close friendships is linked to better physical and mental health, lower rates of drug use and gang membership, and higher levels of academic achievement and engagement. Asked what teachers could do, she suggested allowing boys to sit next to their best friends in class.

Source: http://blogs.edweek.org/teachers/teaching_now/2011/03/psychologist_boys…

High rate of college students with unrecognized hearing loss

On a completely different note, a study involving 56 college students has found that fully a quarter of them showed 15 decibels or more of hearing loss at one or more test frequencies — an amount that is not severe enough to require a hearing aid, but could disrupt learning. The highest levels of high frequency hearing loss were in male students who reported using personal music players.

Source: http://www.physorg.com/news/2011-03-college-students.html

References

Walton, G. M., & Cohen, G. L. (2011). A brief social-belonging intervention improves academic and health outcomes of minority students. Science, 331(6023), 1447-1451.

Stout, J. G., Dasgupta, N., Hunsinger, M., & McManus, M. A. (2011). STEMing the tide: Using ingroup experts to inoculate women's self-concept in science, technology, engineering, and mathematics (STEM). Journal of Personality and Social Psychology, 100(2), 255-270.

Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82(1), 405-432.

Le Prell, C. G., Hensley, B. N., Campbell, K. C. M., Hall, J. W., & Guire, K. (2011). Evidence of hearing loss in a ‘normally-hearing’ college-student population. International Journal of Audiology, 50(S1), S21-S31.

Maybe it has nothing to do with self-control

A Scientific American article discusses a finding that refines a widely-reported association between self-regulation and academic achievement. This association relates to the famous ‘marshmallow test’, in which young children were left alone with a marshmallow, having been told that if they could hold off eating it until the researcher returned, they would get two marshmallows. The ability of these young pre-school children to wait has been linked to subsequent achievement at school, and indeed has been said to be as important as IQ.

The finding relates to other factors that might be involved in a child’s decision not to wait — specifically, children who live in an environment where anything they have could be taken away at any time are making a completely rational choice in not waiting.

Another recent study makes a wider point: the children in the classical paradigm don’t know how long they will have to wait. This, the researchers say, changes everything.

In this survey, adults were asked to imagine themselves in a variety of scenarios (watching a movie, practicing the piano, trying to lose weight, and so on). Told how long they had been at the activity, they were asked how long they thought it would be until they reached their goal or the end. There were marked differences in responses depending on whether the scenario had a relatively well-defined length or was more ambiguous.

Now, this in itself is no surprise. What is a surprise is that, rather than the usual feeling that the longer you’ve waited the closer you are to the end, when you don’t know anything about when the outcome will occur, the reverse happens: the longer you wait, the farther away the outcome seems to be.

The researchers suggest that this changes the interpretation of the marshmallow test — not in terms of predicting ability to delay gratification, but in terms of the mechanism behind it. Rather than reflecting two opposing systems fighting it out (your passionate id at war with your calculating super-ego), waiting for a while then giving in may be perfectly rational behavior. It may not be about ‘running out’ of will-power at all.

According to this model, which fits the observed behavior, and which I have to say makes perfect sense to me, there are three factors that influence persistence (a toy sketch follows the list):

  • beliefs about time — which in this context has to do with how the predicted delay changes over time, i.e., do you believe that the remaining length of time is likely to be the same, shorter, or longer;
  • perceived reward magnitude — how much more valuable the delayed reward is to you than the immediate reward;
  • temporal discount rate — how steeply the value of a reward falls as the wait for it lengthens.
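These three factors can be put together into a toy decision rule. The sketch below is mine: the hyperbolic discount form V = A / (1 + kD) is a standard modelling choice in the discounting literature, not something specified in the study, and all the numbers are only for illustration.

    # Wait for the larger reward only while its discounted value still beats
    # the immediate one. 'k' is the temporal discount rate; the expected
    # remaining wait comes from your (updating) beliefs about time.

    def keep_waiting(immediate: float, delayed: float,
                     expected_remaining_wait: float, k: float) -> bool:
        discounted = delayed / (1 + k * expected_remaining_wait)
        return discounted > immediate

    # Late-night bus stop: the longer you wait, the longer you expect to wait.
    expected_wait = 10.0              # minutes, initial belief
    for minute in range(0, 60, 10):
        if not keep_waiting(immediate=1.0, delayed=2.0,
                            expected_remaining_wait=expected_wait, k=0.05):
            print(f"rational to give up after {minute} minutes")
            break
        expected_wait *= 1.5          # belief update: the wait keeps stretching

Note that ‘giving up’ here is not a failure of will-power; it falls straight out of the belief update.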

A crucial point about temporal beliefs is that they can change as time passes. So, if you’re waiting for a bus, then the reasonable thing to believe is that, the longer you wait, the less time you will have left to wait. But what about if you’re waiting at a stop very late at night? In that case, the longer you wait, the more certain you might become that a bus will not in fact be coming for many hours. How about when you text someone? You probably start off expecting a reply right away, but the longer you wait the longer you expect to wait (if they’re not answering right away, it might be hours; they might not even see your text at all).

Another important aspect of these factors is that they are subjective (especially the last two), and will vary between individuals. This places ‘failures’ on differences in an individual’s temporal discount rate and perceived reward magnitude, rather than on poor self-control.

But what about the evidence that performance on this test correlates with later academic achievement? Well, temporal discount rate also appears to show ‘trait-like stability over time’, and has also been found to correlate with cognitive ability. Temporal discount rate, it seems to me, has a clear connection to motivation, and I have talked before about the way motivation can make a significant impact on someone’s IQ score or exam performance.

So maybe we should move away from worries about ‘self-control’, and start thinking about why some people put a higher value on short waiting times than others. How much of this is due to early experiences? And what can we do about it?

We also need to think very hard about the common belief that persistence is always a virtue. If you’re waiting for a bus that hasn’t come after an hour, and it’s now one in the morning, your best choice is probably to give up and find some other means home.

Although persistence is often regarded as a virtue, misguided persistence can waste time and resources and can therefore defeat one's chances of success at superordinate goals . . . Rather than assuming that persistence is generally adaptive, the issue should be conceptualized as making judgments about when persistence will be effective and when it will be useless or even self-defeating. (Baumeister & Scher, 1988, pp. 12–13)

All of which is to say that, as with all human behavior, persistence (sometimes equated to ‘will-power’; sometimes to 'self-regulation') is a product of both the individual and the environment. If some children are doing well and others are not, perhaps you shouldn’t be attributing this to stable traits of the children, but to the way different children perceive the situation.

Nor is it only in the academic environment that these things matter. Our ability to delay gratification and our motivation are attributes that underlie our behavior and our success across our lives. If we turn these ‘attributes’ around and, instead of seeing them as personal traits, see them as dynamic attributes that reflect situational factors interacting with personal attributes, then we have a better chance of getting the results we want. If we can pinpoint perceived reward and temporal discount rate as critical factors in this individual-environment interaction, we know exactly what variables to consider and manipulate.

We are built to like simple solutions — a number, a label that we can pin on ourselves or another — but surely we have become sufficiently sophisticated that we can now handle more complex information? We need to move from considering people, whether ourselves or others, as independent agents acting in a vacuum, to considering them as part of an indissoluble organism-environment unit. Let’s get away from a fixation on IQ scores, or SAT scores, or even complex multi-factorial scores, and realize that those, even the most predictive ones, are only ever one part of the story. No one is the same person at every moment, and it’s time we took that point more seriously.

References

McGuire, J. T., & Kable, J. W. (2013). Rational temporal predictions can underlie apparent failures to delay gratification. Psychological Review, 120(2), 395-410. doi:10.1037/a0031910

Baumeister, R. F., & Scher, S. J. (1988). Self-defeating behavior patterns among normal individuals: Review and analysis of common self-destructive tendencies. Psychological Bulletin, 104, 3-22. doi:10.1037/0033-2909.104.1.3

Intelligence isn’t as important as you think

Our society gives a lot of weight to intelligence. Academics may have been arguing for a hundred years over what, exactly, intelligence is, but ‘everyone knows’ what it means to be smart, and who is smart and who is not — right?

Of course, it’s not that simple, and the ins and outs of academic research have much to teach us about the nature of intelligence and its importance, even if they still haven’t got it all totally sorted yet. Today I want to talk about one particular aspect: how important intelligence is in academic success.

First of all, to simplify the discussion, let’s start by pretending that intelligence equals “g” and is measured by IQ testing. “g” stands for “general factor”, and reflects the shared element between multiple cognitive tests. It is a product of a statistical technique known as factor analysis, which measures the inter-correlation between scores on various cognitive tasks. It is no surprise to any of us that cognitive tasks should be correlated — that people who do well on one task are likely to do well on others, while people who do poorly on one are likely to perform poorly on others. No surprise, either, that some cognitive tasks will be more highly correlated than others.

But here’s the thing: the g factor, while it explains a lot of the individual differences in performance on an IQ test, accounts for performance on some of the component sub-tests better than others. In other words, g is more important for some cognitive tasks than others. Again, not terribly unexpected: some tasks require more ‘intelligence’ than others. One way of describing such tasks is to say that they are cognitively more complex. In the context of the IQ test, each sub-test accordingly has a different “g-loading”.

Now there is no doubting that IQ is a good predictor of academic performance, but what does that mean exactly? How good is ‘good’? Well, according to Flynn, IQ tests that are heavily loaded on g reliably predict about 25% of the variance in academic achievement (note that this is about variance, that is, the differences between people; it is not the same as saying that IQ accounts for a quarter of academic performance). But this does vary significantly depending on age and population — for example, in a group of graduate students, the relative importance of other factors will be greater than it is in a cross-section of ten-year-olds. In the study I will discuss later, the figure cited is closer to 17%.
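To make the ‘variance’ point concrete, here’s the arithmetic (a worked example of my own, not Flynn’s): variance explained is the square of the correlation coefficient, so explaining 25% of the variance corresponds to a correlation of 0.5 between IQ and achievement, and the 17% figure to a correlation of about 0.41.

    % Variance explained = r^2, the squared correlation:
    r^{2} = 0.5^{2} = 0.25 \quad \text{(25\% of the variance)}
    \qquad
    r^{2} = 0.17 \;\Rightarrow\; r = \sqrt{0.17} \approx 0.41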

Regardless of whether it’s as much as 25% or as little as 17%, I would have thought that these figures are much smaller than most people would imagine, given the weight that we give to intelligence.

So what are the other factors behind doing well at school (and, later, at work)?

The most obvious one is effort. One way to measure how hard people work is through the personality dimension of Conscientiousness.

One study involving 247 British university students compared the predictive power of the “Big Five” personality traits (Neuroticism, Extraversion, Openness to Experience, Agreeableness, Conscientiousness) on later exam performance, and found that Conscientiousness was the only trait to have a significant positive effect. Illuminatingly, of Conscientiousness’s components (Competence, Order, Dutifulness, Achievement striving, Self-discipline, Deliberation), only Dutifulness, Achievement striving, and (to a lesser extent) Self-discipline had significant effects.

There were also smaller and less reliable negative effects of Neuroticism and Extraversion, coming mainly from Anxiety and Impulsiveness (components of Neuroticism), and from Gregariousness and Activity (components of Extraversion).

Overall, Dutifulness, Achievement striving, and Activity accounted for 28% of the variance in overall exam grades (over the three years of the students’ undergraduate degrees).

But note that these students were highly selected — undergraduates were (at this point in time) accepted to University College London at an application-to-acceptance ratio of 12:1 — so IQ is going to be less important as a source of individual difference.

In another study by some of the same researchers, 80 sixth-formers (equivalent to grade 10) were given both personality and intelligence tests. Conscientiousness and Openness to Experience were found to account for 13% of unique variance in academic performance, and intelligence for 10%. Interestingly, there were subject differences. Intelligence was more important than personality for science subjects (including math), while the reverse was true for English language (literature, language) subjects.

The so-called Big Five personality dimensions are well-established, but recently a new model has introduced a sixth dimension: Honesty-Humility. Unexpectedly (to me at least), a recent study showed this dimension also has some implications for academic performance.

The first experiment in this study involved 226 undergraduate students from a School of Higher Education in the Netherlands. Both Conscientiousness and Honesty-Humility were significantly and positively correlated to grade point average (with Conscientiousness having the greater effect). All the components of Conscientiousness (in this model, Organization, Diligence, Perfectionism, Prudence) were significantly related to GPA. Three of the four components of Honesty-Humility (Greed Avoidance, Modesty, Fairness) were significantly related to GPA (in that order of magnitude).

In the second experiment, a wider data-set was used: 1262 students from the same school were given the Multicultural Personality Test—Big Six, which measures Emotional Stability, Conscientiousness, Extraversion, Agreeableness, Openness, and Integrity (a construct similar to Honesty-Humility, involving the facets Honesty, Sincerity, and Greed Avoidance). Again, Conscientiousness and Integrity showed significant positive correlations with GPA. In this case, Conscientiousness was divided into Need for Rules and Certainty, Orderliness, Perseverance, and Achievement Motivation — all of which were separately significant predictors of GPA. For Integrity, Greed Avoidance produced the largest effect, with Honesty a smaller but still highly significant effect, while Sincerity was of more marginal significance.

In summary, personality traits such as Diligence, Achievement Motivation, Need for Rules and Certainty, Greed Avoidance, and Modesty, were the traits most strongly associated with academic performance.

Of course, one flaw in personality tests is that they rely on self-reports. A much-discussed longitudinal study of eighth-graders found that self-discipline accounted for more than twice as much variance as IQ in final grades. Moreover, self-discipline also predicted which students would improve their grades over the course of the year, which IQ didn’t.

Again, however, it should be noted that this is a selected group — the students came from a magnet public school in which students were admitted on the basis of their grades and test scores.

This study measured self-discipline not only by self-report, but also by parent report, teacher report, and monetary choice questionnaires (in an initial experiment involving 140 students), with a behavioral delay-of-gratification task and a questionnaire on study habits added in a replication involving 164 students.

One personality trait that many have thought should be a factor in academic achievement is Openness to Experience, and indeed, in some experiments it has been so. It may be that Openness to Experience, which includes Fantasy (vivid imagination), Aesthetic Sensitivity, Attentiveness to Inner Feelings, Actions (engagement in novel activities), Ideas, and Values (readiness to reexamine traditional values), is associated with higher intelligence but not necessarily academic success (depending perhaps on subject?).

It may also be that, as with Neuroticism, Extraversion, and Conscientiousness, only some (or even one) of the component traits is relevant to academic performance. The obvious candidate is Ideas, described as the tendency to be intellectually curious and open to new ideas. Supporting this notion, recent research provides evidence that Openness incorporates two related but distinct factors: Intellect (Ideas) and Openness (artistic and contemplative qualities, embodied in Fantasy, Aesthetics, Feelings, and Actions), with Values a distinct marker belonging to neither camp.

A recent meta-analysis, gathering data from studies that have employed the Typical Intellectual Engagement (TIE) scale (as a widely-used proxy for intellectual curiosity), has found that curiosity had as large an effect on academic performance as conscientiousness, and together, conscientiousness and curiosity had as big an effect on performance as intelligence.

Of course, while research has shown (not unexpectedly) that Conscientiousness and Intelligence are quite independent, the correlation between Intelligence and Curiosity is surely significant. In fact, this study found significant correlations both between TIE and Intelligence and between TIE and Conscientiousness. Nevertheless, the best-fit model indicated that all three factors were direct predictors of academic performance.

More to the point, these three important attributes all together still accounted for only a quarter of the variance in academic performance.

Regardless of the precise numbers (this area of study depends on complex statistical techniques, and I wouldn’t want to rest any case on any specific figure!), it is clear from the wealth of research (which I have barely touched on) that although intelligence is an important attribute in determining success in the classroom and in employment, it is only one among a number of important attributes. And so is diligence. Perhaps we should spend less time praising intelligence and hard work, and more time encouraging engagement and curiosity, and a disinterest in luxury goods and high social status.

Read more about the curiosity study at https://medicalxpress.com/news/2011-10-curiosity-doesnt-student.html

References

Chamorro-Premuzic, T., & Furnham, A. (2003). Personality traits and academic examination performance. European Journal of Personality, 17(3), 237-250. doi:10.1002/per.473

Duckworth, A. L., & Seligman, M. E. P. (2005). Self-discipline outdoes IQ in predicting academic performance of adolescents. Psychological Science, 16(12), 939-944. doi:10.1111/j.1467-9280.2005.01641.x

Furnham, A., & Chamorro-Premuzic, T. (2005). Personality and intelligence: Gender, the Big Five, self-estimated and psychometric intelligence. International Journal of Selection and Assessment, 13(1), 11-24.

Furnham, A., Rinaldelli-Tabaton, E., & Chamorro-Premuzic, T. (2011). Personality and intelligence predict arts and science school results in 16 year olds. Psychologia, 54(1), 39-51.

von Stumm, S., Hell, B., & Chamorro-Premuzic, T. (2011). The hungry mind. Perspectives on Psychological Science, 6(6), 574-588.

Shaping your cognitive environment for optimal cognition

Humans are the animals that manipulate their cognitive environment.

I reported recently on an intriguing study involving an African people, the Himba. The study found that the Himba, while displaying an admirable amount of focus (in a visual perception task) when living a traditional life, showed the de-focused, distractible attention typical of urbanized Westerners once they moved to town. On the other hand, digit span (a measure of working memory capacity) was smaller in the traditional Himba than in the urbanized Himba.

This is fascinating, because working memory capacity has proved remarkably resistant to training. Yes, we can improve performance on specific tasks, but it has proven more difficult to improve the general, more fundamental, working memory capacity.

However, there have been two areas where more success has been found. One is the area of ADHD, where training has appeared to be more successful. The other is an area no one thinks of in this connection, because no one thinks of it in terms of training, but rather in terms of development — the increase in WMC with age. So, for example, average WMC increases from 4 chunks at age 4, to 5 at age 7, 6 at age 10, to 7 at age 16. It starts to decrease again in old age. (Readers familiar with my work will note that these numbers are higher than the numbers we now tend to quote for WMC — these numbers reflect the ‘magic number 7’, i.e. the number of chunks we can hold when we are given the opportunity to actively maintain them.)

Relatedly, there is the Flynn effect. The Flynn effect is ostensibly about IQ (specifically, the rise in average IQ over time), but IQ has a large WM component. Having said that, when you break IQ tests into their sub-components and look at their change over time, you find that the Digit Span subtest is one component that has made almost no gain since 1972.

But of course 1972 is still very modern! There is no doubt that there are severe constraints on how much WMC can increase, so it’s reasonable to assume we hit the ceiling long ago (speaking of urbanized Western society as a group, not of individuals).

It’s also reasonable to assume that WMC is affected by purely physiological factors involving connectivity, processing speed and white matter integrity — hence at least some of the age effect. But does it account for all of it?

What the Himba study suggests (and I do acknowledge that we need more and extended studies before taking these results as gospel), is that urbanization provides an environment that encourages us to use our working memory to its capacity. Urbanization provides a cognitively challenging environment. Our focus is diffused for that same reason — new information is the norm, rather than the exception; we cannot focus on one bit unless it is of such threat or interest that it justifies the risk.

ADHD shows us, perhaps, what can happen when this process is taken to the extreme. So we might take these three groups (traditional Himba, urbanized Himba, individuals with ADHD) as points on the same continuum. The continuum reflects degree of focus, and the groups reflect environmental effects. This is not to say that there are not physiological factors predisposing some individuals to react in such a way to the environment! But the putative effects of training on ADHD individuals points, surely, to the influence of the environment.

Age provides an intriguing paradox, because as we get older, two things tend to happen: we have a much wider knowledge base, meaning that less information is new, and we usually shrink our environment, meaning again that less information is new. All things being equal, you would think that would mean our focus could afford to draw in. However, as my attentive readers will know, declining cognitive capacity in old age is marked by increasing difficulties in ignoring distraction. In other words, it’s the urbanization effect writ larger.

How to account for this paradox?

Perhaps it simply reflects the fact that the modern environment is so cognitively demanding that these factors aren’t sufficient on their own to enable us to relax our alertness and tighten our focus, in the face of the slowdown in processing speed that typically occurs with age (there’s some evidence that it is this slowdown that makes it harder for older adults to suppress distracting information). Perhaps the problem is not simply, or even principally, the complexity of our environment, but the speed of it. You only have to compare a modern TV drama or sit-com with one from the 70s to see how much faster everything now moves!

I do wonder whether, in a less cognitively demanding environment (say, a traditional Himba village), WMC shows the same early rise and late decline. In an environment where change is uncommon, it is natural for elders to be respected for their accumulated wisdom — experience is all — but perhaps this respect also reflects a constancy in WMC (and thus ‘intelligence’), so that elders are not disadvantaged in the way they may be in our society. Just a thought.

Here’s another thought: it’s always seemed to me (this is not in any way a research-based conclusion!) that musicians and composers, and writers and professors, often age very well. I’ve assumed this was because they are keeping mentally active, and certainly that must be part of it. But perhaps there’s another, possibly even more important, reason: these are areas of expertise where the practitioner spends a good deal of time focused on one thing. Rather than allowing their attention to be diffused throughout the environment all the time, they deliberately shut off their awareness of the environment to concentrate on their music, their writing, their art.

Perhaps, indeed, this is the factor that distinguishes the activities that help fight age-related cognitive decline from those that don’t.

I began by saying that humans are the animals that manipulate their cognitive environment. I think this is the key to fighting age-related cognitive decline, or ADHD if it comes to that. We need to be aware how much our brains try to operate in a way that is optimal for our environment — meaning that, by controlling our environment, we can change the way our brain operates.

If you are worried about your ‘scattiness’, or if you want to prevent or fight age-related cognitive decline, I suggest you find an activity that truly absorbs and challenges you, and engage in it regularly.

The increase in WMC in Himba who moved to town also suggests something else. Perhaps the reason that WM training programs have had such little success is because they are ‘programs’. What you do in a specific environment (the bounds of a computer and the program running on it) does not necessarily, or even usually, transfer to the wider environment. We are contextual creatures, used to behaving in different ways with different people and in different places. If we want to improve our WMC, we need to incorporate experiences that challenge and extend it into our daily life.

This, of course, emphasizes my previous advice: find something that absorbs you, something that becomes part of your life, not something you 'do' for an hour some days. Learn to look at the world in a different way, through music or art or another language or a passion (Civil War history; Caribbean stamps; whatever).

You can either let your cognitive environment shape you, or shape your cognitive environment.

Do you agree? What's your cognitive environment, and do you think it has affected your cognitive well-being?

Benefits from fixed quiet points in the day

On my walk today, I listened to a downloaded interview from the On Being website. The interview was with ‘vocal magician and conductor’ Bobby McFerrin, and something he said early on in the interview really caught my attention.

In response to a question about why he’d once (in his teens) contemplated joining a monastic order, he said that the quiet really appealed to him, and also ‘the discipline of the hours … there’s a rhythm to the day. I liked the fact that you stopped whatever you were doing at a particular time and you reminded yourself, you brought yourself back to your calling’.

Those words resonated with me, and they made me think of the Muslim practice of prayer. Of the idea of having specified times during the day when you stop your ‘ordinary’ life, and touch base, as it were, with something that is central to your being.

I don’t think you need to be a monk or a Muslim to find value in such an activity! Nor does the activity need to be overtly religious.

Because this idea struck another echo in me — some time ago I wrote a brief report on how even a short ‘quiet time’ can help you consolidate your memories. It strikes me that developing the habit of having fixed points in the day when (if at all possible) you engage in some regular activity that helps relax you and center your thoughts, would help maintain your focus during the day, and give you a mental space in which to consolidate any new information that has come your way.

Appropriate activities could include:

  • meditating on your breath;
  • performing a t’ai chi routine;
  • observing nature;
  • listening to certain types of music;
  • singing/chanting some song/verse (e.g., the Psalms; the Iliad; the Tao te Ching).

Regarding the last two suggestions, as I reported in my book on mnemonics, there’s some evidence that reciting the Iliad has physiological effects, synchronizing heartbeat and breath in a way that is beneficial for both mood and cognitive functioning. It’s speculated that the critical factor might be the hexametric pace (dum-diddy, dum-diddy, dum-diddy, dum-diddy, dum-diddy, dum-dum). Dactylic hexameter, the rhythm of classical epic, has a musical counterpart: 6/8 time.

Similarly, another small study found that singing Ave Maria in Latin, or chanting a yoga mantra, likewise affects brain blood flow, and the crucial factor appeared to be a rhythm that involved breathing at the rate of six breaths a minute.

Something to think about!

How working memory works: What you need to know

A New Yorker cartoon has a man telling his glum wife, “Of course I care about how you imagined I thought you perceived I wanted you to feel.” There are a number of reasons you might find that funny, but the point here is that it is very difficult to follow all the layers. This is a sentence in which mental attributions are made to the 6th level, and this is just about impossible for us to follow without writing it down and/or breaking it down into chunks.

According to one study, while we can comfortably follow a long sequence of events (A causes B, which leads to C, thus producing D, and so on), we can only comfortably follow four levels of intentionality (A believes that B thinks C wants D). At the 5th level (A wants B to believe that C thinks that D wants E), error rates rose sharply to nearly 60% (compared to 5-10% for all levels below that).
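One way to see the structural difference (this illustration is mine, not the study’s): a causal chain can be walked one link at a time, but nested intentionality has to be held open at every level at once, like a call stack.

    # A causal chain is flat: each step needs only the current link.
    chain = ["A causes B", "B leads to C", "C produces D"]
    for link in chain:
        pass  # constant memory load, however long the chain

    # Nested intentionality must be held open all the way down.
    nested = ("A believes", ("B thinks", ("C wants", "D")))

    def depth(clause) -> int:
        """How many attitude levels must be simultaneously kept open."""
        if isinstance(clause, tuple):
            return 1 + depth(clause[1])
        return 0

    print(depth(nested))  # -> 3 nested attitude verbs held open at once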

Why do we have so much trouble following these nested events, as opposed to a causal chain?

Let’s talk about working memory.

Working memory (WM) has evolved over the years from a straightforward “short-term memory store” to the core of human thought. It’s become the answer to almost everything, invoked for everything related to reasoning, decision-making, and planning. And of course, it’s the first and last port of call for all things memory — to get stored in long-term memory an item first has to pass through WM, where it’s encoded; when we retrieve an item from memory, it again passes through WM, where the code is unpacked.

So, whether or not the idea of working memory has been over-worked, there is no doubt at all that it is utterly crucial for cognition.

Working memory has also been equated with attentional control, and working memory and attention are often used almost interchangeably. And working memory capacity (WMC) varies among individuals. Those with a higher WMC have an obvious advantage in reasoning, comprehension, remembering. No surprise then that WMC correlates highly with fluid intelligence.

So let’s talk about working memory capacity.

The idea that working memory can hold 7 (+/-2) items has passed into popular culture (the “magic number 7”). More recent research, however, has circled around the number 4 (+/-1). Not only that, but a number of studies suggest that in fact the true number of items we can attend to is only one. What’s the answer? (And where does it leave our high- and low-capacity individuals? There’s not a lot of room to vary there.)

Well, in one sense, 7 is still fine — that’s the practical sense. Seven items (5-9) is about what you can hold if you can rehearse them. So those who are better able to rehearse and chunk will have a higher working memory capacity (WMC). That will be affected by processing speed, among other factors.

But there is a very large body of evidence now pointing to working memory holding only four items, and a number of studies indicating that most likely we can only pay attention to one of these items at a time. So you can envision this either as a focus of attention, which can only hold one item, and a slightly larger “outer store” or area of “direct access” which can hold another three, or as a mental space holding four items of which only one can be the focus at any one time.

A further tier, which may be part of working memory or part of long-term memory, probably holds a number of items “passively”. That is, these are items you’ve put on the back burner; you don’t need them right at the moment, but you don’t want them to go too far either. (See my recent news item for more on all this.)

At present, we don’t have any idea how many items can be in this slightly higher state of activation. However, the “magic number 7” suggests that you can circulate 3 (+/-1) items from the back burner into your mental space. In this regard, it’s interesting to note that, in the case of verbal material, the amount you can hold in working memory with rehearsal has been found to more accurately equate to 2 seconds, rather than 7 items. That is, you can remember as much as you can verbalize in about 2 seconds (so, yes, fast speakers have a distinct advantage over slower ones). You see why processing speed affects WMC.
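A quick back-of-envelope of that 2-second finding (the speech rates here are my illustrative assumptions, not figures from the research):

    # Items maintainable by rehearsal ~ speech rate x 2-second window.
    REHEARSAL_WINDOW_S = 2.0

    for label, items_per_second in [("slow speaker", 2.0), ("fast speaker", 3.5)]:
        span = REHEARSAL_WINDOW_S * items_per_second
        print(f"{label}: ~{span:.0f} items maintainable by rehearsal")

A slow speaker fits about 4 items into the window, a fast one about 7, which is why speaking (and processing) speed shows up in measured spans.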

Whether you think of WM as a focus of one and an outer store of 3, or as a direct access area with 4 boxes and a spotlight shining on one, it’s a mental space or blackboard where you can do your working out. Thinking of it this way makes it easier to conceptualize and talk about, but these items are probably not going into a special area as such. The thought now is that these items stay in long-term memory (in their relevant areas of association cortex), but they are (a) highly activated, and (b) connected to the boxes in the direct access area (which is possibly in the medial temporal lobe). This connection is vitally important, as we shall see.
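Here is the same picture as a toy data structure (my sketch of the description above, not an implementation from the research): items stay in long-term memory, and working memory holds at most four bindings to them, with one binding in focus at a time.

    from dataclasses import dataclass, field

    DIRECT_ACCESS_SLOTS = 4      # the '4 +/- 1' estimate
    FOCUS_SWITCH_COST_MS = 240   # the switch cost discussed below

    @dataclass
    class WorkingMemory:
        bindings: list = field(default_factory=list)  # temporary links into LTM
        focus: int = 0                                # which binding is in focus
        elapsed_ms: float = 0.0

        def bind(self, ltm_item: str) -> None:
            """Bind an LTM item to a direct-access slot, displacing the oldest."""
            if len(self.bindings) >= DIRECT_ACCESS_SLOTS:
                self.bindings.pop(0)
            self.bindings.append(ltm_item)

        def attend(self, index: int) -> str:
            """Shift the spotlight; switching bindings costs time."""
            if index != self.focus:
                self.elapsed_ms += FOCUS_SWITCH_COST_MS
                self.focus = index
            return self.bindings[index]

    wm = WorkingMemory()
    for item in ["red", "cow", "ribbon", "Isabel"]:
        wm.bind(item)
    wm.attend(0)
    wm.attend(2)
    print(f"time spent just switching focus: {wm.elapsed_ms:.0f} ms")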

Now four may not seem like much, but WM is not quite as limited as it seems, because we have different systems for verbal (includes numerical) and visuospatial information. Moreover, we can probably distinguish between the items and the processing of them, which equates to a distinction between declarative and procedural memory. So that gives us three working memory areas: verbal declarative; visuospatial declarative; procedural.

Now all of this may seem more than you needed to know, but breaking down the working memory system helps us discover two things of practical interest. First, which particular parts of the system are the parts that make a task more difficult. Second, where individual differences come from, and whether they are in aspects that are trainable.

For example, this picture of a mental space with a focus of one and a maximum of three eager-beavers waiting their turn, points to an important aspect of the working memory system: switching the focus. Experiments reveal that there is a large focus-switching cost, incurred whenever you have to switch the item in the spotlight. And the extent of this cost has been surprising — around 240ms in one study, which is about six times the length of time it takes to scan an item in a traditional memory-search paradigm.

But focus-switch costs aren’t a constant. They vary considerably depending on the difficulty of the task, and they also tend to increase with each item in the direct-access area. Indeed, just having one item in the space outside the focus causes a significant loss of efficiency in processing the focused item.

This may reflect increased difficulty in discriminating one highly activated item from other highly activated items. This brings us to competition, which, in its related aspects of interference and inhibition, is a factor probably more crucial to WMC than whether you have 3 or 4 or 5 boxes in your direct access area.

But before we discuss that, we need to look at another important aspect of working memory: updating. Updating is closely related to focus-switching, and it’s easy to get confused between them. But it’s been said that working memory updating (WMU) is the only executive function that correlates with fluid intelligence, and updating deficits have been suggested as the reason for poor comprehension (which is also correlated with low WMC). So it’s worth spending a little time on.

To get the distinction clear in your mind, imagine the four boxes and the spotlight shining on one. Any time you shift the spotlight, you incur a focus-switching cost. If you don’t have to switch focus, if you simply need to update the contents of the box you’re already focusing on, then there will be an update cost, but no focus-switching cost.

Updating involves three components: retrieval, transformation, and substitution. Retrieval simply involves retrieving the contents from the box. Substitution involves replacing the contents with something different. Transformation involves an operation on the contents of the box to get a new value (e.g., when you have to add a certain number to an earlier number).
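Schematically, one updating step looks like this (my rendering of the three components just listed, not code from the research):

    def update_box(box: dict, key: str, transform) -> None:
        value = box[key]              # 1. retrieval: read the box's contents
        new_value = transform(value)  # 2. transformation: operate on them
        box[key] = new_value          # 3. substitution: overwrite the old contents

    box = {"running_total": 7}
    update_box(box, "running_total", lambda n: n + 5)  # 'add 5 to the last number'
    print(box["running_total"])  # 12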

Clearly the difficulty in updating working memory will depend on which of these components is involved. So which of these processes is most important?

In terms of performance, the most important component is transformation. While all three components contribute to the accuracy of updating, retrieval apparently doesn’t contribute to speed of updating. For both accuracy and speed, substitution is less important than transformation.

This makes complete sense: obviously having to perform an operation on the content is going to be more difficult and time-consuming than simply replacing it. But it does help us see that the most important factor in determining the difficulty of an updating task will be the complexity of the transformation.

The finding that retrieval doesn’t affect speed of updating sounds odd, until you realize the nature of the task used to measure these components. The number of items was held constant (always three), and the focus switched from one box to another on every occasion, so focus-switching costs were constant too. What the finding says is that once you’ve shifted your focus, retrieval takes no time at all — the spotlight is shining and there the answer is. In other words, there really is no distinction between the box and its contents when the spotlight is on it — you don’t need to open the box.

However, retrieval does affect accuracy, and this implies that something is degrading or interfering in some way with the contents of the boxes. Which takes us back to the problems of competition / interference.

But before we get to that, let’s look at this issue of individual differences, because like WMC, working memory updating correlates with fluid intelligence. Is this just a reflection of WMC?

Differences in transformation accuracy correlated significantly with WMC, as did differences in retrieval accuracy. Substitution accuracy didn’t vary enough to have measurable differences. Neither transformation nor substitution speed differences correlated with WMC. This implies that the reason why people with high WMC also do better at WMU tasks is because of the transformation and retrieval components.

So what about the factors that aren’t correlated with WMC? The variance in transformation speed is argued to primarily reflect general processing speed. But what’s going on in substitution that isn’t going on when WMC is measured?

Substitution involves two processes: removing the old contents of the box, and adding new content. In terms of the model we’ve been using, we can think of unbinding the old contents from the box, and binding new contents to it (remember that the item in the box is still in its usual place in the association cortex; it’s “in” working memory by virtue of the temporary link connecting it to the box). Or we can think of it as deleting and encoding.

Consistent with substitution not correlating with WMC, there is some evidence that high- and low-WMC individuals are equally good at encoding. Where high- and low-WMC individuals differ is in their ability to prevent irrelevant information being encoded with the item. Which brings me to my definition of intelligence (from 30 years ago — these ideas hadn’t even been invented yet, so I came at it from quite a different angle): the ability to (quickly) select what’s important.

So why do low-WMC people tend to be poorer at leaving out irrelevant information?

Well, that's the $64,000 question. It has been suggested that those with low working memory capacity are less able to resist capture by distracting stimuli than those with high WMC. A new study, however, provides evidence that low- and high-WMC individuals are equally easily captured by distracters; capture itself, it seems, is unrelated to WMC. What distinguishes the two groups is the ability to disengage: high-capacity people are faster at putting aside irrelevant stimuli, faster at deleting.

This is supported by another recent finding that, when interrupted, older adults find it harder than younger adults to disengage from the interrupting task and re-engage with the original one.

So what’s the problem with deleting / removing / putting aside items in focus? This is about inhibition, which takes us once again to competition / interference.

Now interference occurs at many different levels: during encoding, retrieval, and storage; with items, with tasks, with responses. Competition is ubiquitous in our brain.

In the case of substitution during working memory updating, it's been argued that the contents of the box are not simply removed and replaced, but instead gradually over-written by the new contents. This fits with a view of items as assemblies of lower-level "feature units". Items may share some of these units with other items (that sharing is what their similarity reflects), and the more items compete for the same units, the greater the interference between them.
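To see how overlap translates into interference, here's a toy calculation. The items and the similarity measure are my inventions, purely for illustration:

```python
# Items as sets of feature units: the more units two items share, the
# more they compete for those units, and the more they interfere.
red_cow  = {"red", "cow", "blue ribbon", "name: Isabel"}
red_bull = {"red", "bull", "blue ribbon"}

shared = red_cow & red_bull
overlap = len(shared) / len(red_cow | red_bull)   # Jaccard similarity
print(f"shared units: {shared}")
print(f"overlap = {overlap:.2f}")                 # higher = more interference
```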

You can see why it’s better to keep your codes (items) “lean and mean”, free of any irrelevant information.

Indeed, some theorists completely discard the idea of number of items as a measure of WMC, and talk instead in terms of “noise”, with processing capacity being limited by such factors as item complexity and similarity. While there seems little justification for discarding our “4+/-1”, which is much more easily quantified, this idea does help us get to grips with the concept of an “item”.

What is an item? Is it “red”? “red cow”? “red cow with blue ribbons round her neck”? “red cow with blue ribbons and the name Isabel painted on her side”? You see the problem.

An item is a fuzzy concept. We can’t say, “it’s a collection of 6 feature units” (or 4 or 14 or 42). So we have to go with a less defined description: it’s something so tightly bound that it is treated as a single unit.

Which means it’s not solely about the item. It’s also about you, and what you know, and how well you know it, and what you’re interested in.

To return to our cases of difficulty in disengaging, perhaps the problem lies in the codes being formed. If your codes aren’t tightly bound, then they’re going to start to degrade, losing some of their information, losing some of their distinctiveness. This is going to make them harder to re-instate, and it’s going to make them less distinguishable from other items.

Why should this affect disengagement?

Remember what I said about substitution being a gradual process of over-writing? What happens when your previous focus and new focus have become muddled?

This also takes us to the idea of “binding strength” — how well you can maintain the bindings between the contents and their boxes, and how well you can minimize the interference between them (which relates to how well the items are bound together). Maybe the problem with both disengagement and reinstatement has to do with poorly bound items. Indeed, it’s been suggested that the main limiting factor on WMC is in fact binding strength.

Moreover, if people vary in their ability to craft good codes, if people vary in their ability to discard the irrelevant and select the pertinent, to bind the various features together, then the “size” (the information content) of an item will vary too. And maybe this is what is behind the variation in “4 +/-1”, and experiments which suggest that sometimes the focus can be increased to 2 items. Maybe some people can hold more information in working memory because they get more information into their items.

So where does this leave us?

Let’s go back to our New Yorker cartoon. The difference between a chain of events and the nested attributions is that chaining doesn’t need to be arranged in your mental space because you don’t need to keep all the predecessors in mind to understand it. On the other hand, the nested attributions can’t be understood separately or even in partitioned groups — they must all be arranged in a mental space so we can see the structure.

We can see now that “A believes that B thinks C wants D” is easy to understand because we have four boxes in which to put these items and arrange them. But our longer nesting, “A wants B to believe that C thinks that D wants E”, is difficult because it contains one more item than we have boxes. No surprise there was a dramatic drop-off in understanding.
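A trivial way to put numbers on this (the capacity and the item counts come from the discussion above; the code is only an illustration):

```python
CAPACITY = 4    # the "4 +/- 1" boxes discussed above

easy = ["A believes", "B thinks", "C wants", "D"]                # 4 items
hard = ["A wants", "B to believe", "C thinks", "D wants", "E"]   # 5 items

for statement in (easy, hard):
    verdict = "fits in focus" if len(statement) <= CAPACITY else "exceeds capacity"
    print(f"{len(statement)} nested items: {verdict}")
```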

So, given tasks that fill your mental space, what is it that makes some more difficult than others?

  • The complexity and similarity of the items (making it harder to select the relevant information and bind it all together).
  • The complexity of the operations you need to perform on each item (the longer the processing, the more tweaking you have to do to your item, and the more time and opportunity for interference to degrade the signal).
  • Changing the focus (remember our high focus-switching costs).

But in our 5th level nested statement, the error rate was 60%, not 100%, meaning a number of people managed to grasp it. So what’s their secret? What is it that makes some people better than others at these tasks?

They could have 5 boxes (making them high-WMC). They could have sufficient processing speed and binding strength to unitize two items into one chunk. Or they could have the strategic knowledge to enable them to use the other WM system (transforming verbal data into visuospatial). All these are possible answers.


This has been a very long post, but I hope some of you have struggled through it. Working memory is the heart of intelligence, the essence of attention, and the doorway to memory. It is utterly critical, and cognitive science is still trying to come to grips with it. But we’ve come a very long way, and I think we now have sufficient theoretical understanding to develop a model that’s useful for anyone wanting to understand how we think and remember, and how they can improve their skills.

There is, of course, far more that could be said about working memory (I’ve glossed over any number of points in an effort to say something useful in less than 50,000 words!), and I’m planning to write a short book on working memory, its place in so many educational and day-to-day tasks, and what we can do to improve our skills. But I hope some of you have found this enlightening.

References

Clapp, W. C., Rubens, M. T., Sabharwal, J., & Gazzaley, A. (2011). Deficit in switching between functional brain networks underlies the impact of multitasking on working memory in older adults. Proceedings of the National Academy of Sciences. doi:10.1073/pnas.1015297108

Ecker, U. K. H., Lewandowsky, S., Oberauer, K., & Chee, A. E. H. (2010). The components of working memory updating: An experimental decomposition and individual differences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(1), 170-189. doi:10.1037/a0017891

Fukuda, K., & Vogel, E. K. (2011). Individual differences in recovery time from attentional capture. Psychological Science, 22(3), 361-368. doi:10.1177/0956797611398493

Jonides, J., Lewis, R. L., Nee, D. E., Lustig, C., Berman, M. G., & Moore, K. S. (2008). The mind and brain of short-term memory. Annual Review of Psychology, 59, 193-224. doi:10.1146/annurev.psych.59.103006.093615

Kinderman, P., Dunbar, R. I. M., & Bentall, R. P. (1998). Theory-of-mind deficits and causal attributions. British Journal of Psychology, 89, 191-204.

Lange, E. B., & Verhaeghen, P. (in press). No age differences in complex memory search: Older adults search as efficiently as younger adults. Psychology and Aging.

Oberauer, K., Süß, H.-M., Schulze, R., Wilhelm, O., & Wittmann, W. W. (2000). Working memory capacity — facets of a cognitive ability construct. Personality and Individual Differences, 29(6), 1017-1045. doi:10.1016/S0191-8869(99)00251-2

Oberauer, K. (2005). Control of the contents of working memory — a comparison of two paradigms and two age groups. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(4), 714-728. doi:10.1037/0278-7393.31.4.714

Oberauer, K. (2006). Is the focus of attention in working memory expanded through practice? Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(2), 197-214. doi:10.1037/0278-7393.32.2.197

Oberauer, K. (2009). Design for a working memory. Psychology of Learning and Motivation, 51, 45-100.

Verhaeghen, P., Cerella, J., & Basak, C. (2004). A working memory workout: How to expand the focus of serial attention from one to four items in 10 hours or less. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(6), 1322-1337.

Choosing when to think fast & when to think slow

I recently read an interesting article in the Smithsonian about procrastination and why it’s good for you. Frank Partnoy, author of a new book on the subject, pointed out that procrastination only began to be regarded as a bad thing by the Puritans — earlier (among the Greeks and Romans, for example), it was regarded more as a sign of wisdom.

The examples given about the perils of deciding too quickly made me think about the assumed connection between intelligence and processing speed. We equate intelligence with quick thinking, and the time taken to get the correct answer is a factor in many tests. So, regardless of the excellence of a person's cognitive product, the time it takes them to produce it is vital (in tests, at least).

Similarly, one of the main aspects of cognition impacted by age is processing speed, and one of the principal reasons for people to feel that they are ‘losing it’ is because their thinking is becoming noticeably slower.

But here’s the question: does it matter?

Certainly in a life-or-death, climb-the-tree-fast-or-be-eaten scenario, speed is critical. But in today’s world, the major reason for emphasizing speed is the pace of life. Too much to do and not enough time to do it in. So, naturally, we want to do everything fast.

There is certainly a place for thinking fast. I recently looked through a short book entitled “Speed Thinking” by Ken Hudson. The author’s strategy for speed thinking was basically to give yourself a very brief window — 2 minutes — in which to come up with 9 thoughts (the nature of those thoughts depends on the task before you — I’m just generalizing the strategy here). The essential elements are the tight time limit and the lack of a content limit — to accomplish this feat of 9 relevant thoughts in 2 minutes, you need to lose your inner censor and accept any idea that occurs to you.

If you’ve been reading my last couple of posts on flow, it won’t surprise you that this strategy is one likely to produce that state of consciousness (at least, once you’re in the way of it).

So, I certainly think there’s a place for fast thinking. Short bouts like this can re-energize you and direct your focus. But life is a marathon, not a sprint, and of course we can’t maintain such a pace or level of concentration. Nor should we want to, because sometimes it’s better to let things simmer. But how do we decide when it’s best to think fast or best to think slow? (shades of Daniel Kahneman’s wonderful book Thinking, Fast and Slow here!)

In the same way that achieving flow depends on the match between your skill and the task demands, the best speed for processing depends on your level of expertise, the demands of the task, and the demands of the situation.

For example, Sian Beilock (whose work on math anxiety I have reported on) led a study that demonstrated that, while novice golfers putted better when they could concentrate step-by-step on the accuracy of their performance, experts did better when their attention was split between two tasks and when they were focused on speed rather than accuracy.

Another example comes from a monkey study that has just been in the news. In this study, rhesus macaques were trained to reach out to a target. To do so, their brains needed to know three things: where their hand is, where the target is, and the path for the hand to travel to reach the target. If there’s a direct path from the hand to the target, the calculation is simple. But in the experiment, an obstacle would often block the direct path to the target. In such cases, the calculation becomes a little bit more complicated.

And now we come to the interesting bit: two monkeys participated. As it turns out, one was hyperactive, the other more controlled. The hyperactive monkey would quickly reach out as soon as the target appeared, without waiting to see if an obstacle blocked the direct path. If an obstacle did indeed appear in the path (which it did on two-thirds of trials), he had to correct his movement in mid-reach. The more self-controlled monkey, however, waited a little longer, to see where the obstacle appeared, then moved smoothly to the target. The hyperactive monkey had a speed advantage when the way was clear, but the other monkey had the advantage when the path was blocked.

So perhaps we should start thinking of processing speed as a personality, rather than cognitive, variable!

[An aside: it’s worth noting that the discovery that the two monkeys had different strategies, undergirded by different neural activity, only came about because the researcher was baffled by the inconsistencies in the data he was analyzing. As I’ve said before, our focus on group data often conceals many fascinating individual differences.]

The Beilock study indicates that the ‘correct’ speed — for thinking, for decision-making, for solving problems, for creating — will vary as a function of expertise and attentional demands (are you trying to do two things at once? Is something in your environment or your own thoughts distracting you?). In which regard, I want to mention another article I recently read — a blog post on EdWeek, on procedural fluency in math learning. That post referenced an article on timed tests and math anxiety (which I’m afraid is only available if you’re registered on the EdWeek site). This article makes the excellent point that timed tests are a major factor in developing math anxiety in young children. Which is a point I think we can generalize.

Thinking fast, for short periods of time, can produce effective results, and the rewarding mental state of flow. Being forced to try and think fast, when you lack the necessary skills, is stressful and non-productive. If you want to practice thinking fast, stick with skills or topics that you know well. If you want to think fast in areas in which you lack sufficient expertise, work on slowly and steadily building up that expertise first.

Taking things too seriously

I was listening to a podcast the other day. Two psychologists (Andrew Wilson and Sabrina Galonka) were being interviewed about embodied cognition, a topic I find particularly interesting. As an example of what they meant by embodied cognition (something rather more specific than the fun and quirky little studies that are so popular nowadays — e.g., people making smaller estimations of quantities when leaning to the left; squeezing a soft ball making people more likely to see gender-neutral faces as female, while squeezing a hard ball inclines them to see the faces as male; holding a heavier clipboard making people more likely to judge currencies as more valuable and their opinions and leaders as more important), they mentioned the outfielder problem. Without getting into the details (if you’re interested, the psychologists have written a good article on it on their blog), here’s what I took away from the discussion:

We used to think that, in order to catch a ball, our brain was doing all these complex math- and physics-related calculations — try programming a robot to do this, and you’ll see just how complex the calculations need to be! And of course this is that much more complicated when the ball isn’t aimed at you and is traveling some distance (the outfielder problem).

Now we realize it’s not that complicated — our outfielder is moving, and this is the crucial point. Apparently (according to my understanding), if he moves at the right speed to make his perception of the ball’s speed uniform (the ball decelerates as it goes up, and accelerates as it comes down, so the catcher does the inverse: running faster as the ball rises and slower as it falls), then — if he times it just right — the ball will appear to be traveling in a straight line, and the mental calculation of where it will be is simple.

(This, by the way, is what these psychologists regard as ‘true’ embodied cognition — cognition that is the product of a system that includes the body and the environment as well as the brain.)
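For the curious, here's a minimal simulation of that strategy, often called "optical acceleration cancellation". The numbers and the particular constant are made up; the point is only that keeping the ball's apparent rise uniform brings you to the landing spot with no trajectory prediction at all.

```python
import numpy as np

# Toy simulation of the outfielder strategy described above, assuming a
# simple 2-D projectile and invented numbers.
g = 9.81                      # gravity (m/s^2)
vx, vy = 20.0, 25.0           # hypothetical launch velocities (m/s)
T = 2 * vy / g                # time of flight
landing_x = vx * T            # where the ball will land

c = 0.5                       # any constant rate of apparent rise will do
for t in np.linspace(0.1 * T, 0.999 * T, 6):
    x = vx * t                        # ball's horizontal position
    y = vy * t - 0.5 * g * t ** 2     # ball's height
    # Stand where tan(elevation angle) = c * t, i.e. where the ball
    # appears to rise at a uniform rate.
    fielder = x + y / (c * t)
    print(f"t = {t:4.2f}s  fielder at {fielder:6.2f} m  (ball lands at {landing_x:.2f} m)")
```

Run it and the fielder's position converges on the landing point as the ball comes down, without the landing point ever being computed.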

This idea suggests two important concepts that are relevant to those wishing to improve their memory:

We (like all animals) have been shaped by evolution to follow the doctrine of least effort. Mental processing doesn’t come cheap! If we can offload some of the work to other parts of the system, then it’s sensible to do so.

In other words, there’s no great moral virtue in insisting on doing everything mentally. Back in the day (2,500-odd years ago), it was said that writing things down would cause people to lose their ability to remember (in Plato’s Phaedrus, Socrates has the Egyptian god-pharaoh say to Thoth, the god who invented writing, “this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.”)

This idea has lingered. Many people believe that writing reminders to oneself, or using technology to remember for us, ‘rots our brains’ and makes us incapable of remembering for ourselves.

But here’s the thing: the world is full of information. And it is of varying quality and importance. You might feel that someone should be remembering certain information ‘for themselves’, but this is a value judgment, not (as you might believe) a helpful warning that their brain is in danger of atrophying itself into terminal dysfunction. The fact is, we all choose what to remember and what to forget — we just might not have made a deliberate and conscious choice. Improving your memory begins with this: actually thinking about what you want to remember, and practicing the strategies that will help you do just that.

However, there’s an exception to the doctrine of least effort, and it’s evident among all the animals with sufficient cognitive power — fun. All of us who have enough brain power to spare, engage in play. Play, we are told, has a serious purpose. Young animals play to learn about the world and their own capabilities. It’s a form, you might say, of trial-&-error — but a form with enjoyability built into the system. This enjoyability is vital, because it motivates the organism to persist. And persistence is how we discover what works, and how we get the practice to do it well.

What distinguishes a good outfielder from someone who’s never tried to catch a ball before? Practice. To judge the timing, to get the movement just right — movement which will vary with every ball — you need a lot of practice. You can’t just read about what to do. And that’s true of every physical skill. Less obviously, it’s true of cognitive skills also.

It also ties back to what I was saying about trying to achieve flow. If you’re not enjoying what you’re doing, it’s probably either too easy or too hard for you. If it’s too easy, try and introduce some challenge into it. If it’s too hard, break it down into simpler components and practice them until you have achieved a higher level of competence on them.

Enjoyability is vital for learning well. So don’t knock fun. Don’t think play is morally inferior. Instead, try and incorporate a playful element into your work and study (there’s a balance, obviously!). If you have hobbies you enjoy, think about elements you can carry across to other activities (if you don’t have a hobby you enjoy, perhaps you should start by finding one!).

So the message for today is: the holy grail in memory and learning is NOT to remember everything; the superior approach to work / study / life is NOT total mastery and serious dedication. An effective memory is one that remembers what you want/need it to remember. Learning occurs through failure. Enjoyability greases the path to the best learning and the most effective activity.

Let focused fun be your mantra.

Memory is complicated

Recently a “Framework for Success in Postsecondary Writing” came out in the U.S. This framework talked about the importance of inculcating certain “habits of mind” in students. One of these eight habits was metacognition, which they defined as the ability to reflect on one’s own thinking as well as on the individual and cultural processes used to structure knowledge.

The importance of metamemory was emphasized in two recent news items I posted, both dealing with encoding fluency and the way many of us use it to judge how well we’ve learned something, or how likely we are to remember it. The basic point is that we commonly use a fluency heuristic (“it was easy to read/process, therefore it will be easily remembered”) to guide our learning, and yet ease of processing is often completely irrelevant to how well we’ll actually remember.

BUT, not always irrelevant.

In the study discussed in “Fluency heuristic is not everyone’s rule”, people who believed intelligence is malleable did not use the fluency heuristic. In one situation this was absolutely the right thing to do; in the other, not so much, because there what made the information easy to process did in fact also make it easier to remember.

The point is not that the fluency heuristic is wrong. Nor that it is right. The point is that heuristics (“rules of thumb”) are general guidelines, useful as quick and dirty ways of dealing with things you lack the expertise to deal with better. Heuristics are useful, but they are most useful when you have the knowledge to know when to apply them. The problem is not the use of this heuristic; it is the inflexible use of this heuristic.

Way back, more than ten years ago, I wrote a book called The Memory Key, and in it I said: “The more you understand about how memory works, the more likely you are to benefit from instruction in particular memory skills.” That’s what my books are all about, and that’s what this website is all about.

Learning a “rule” is one thing; learning to tell when it’s appropriate to apply it is quite another. My approach to teaching memory strategies is far more complex than the usual descriptions, because learning how to perform a strategy is not particularly helpful on its own. But the reason most memory-improvement books and courses don’t try to do what I do is that explaining how it all works — how memory works, how the strategy works, how it all fits together — is a big task.

But the fact is, learning is a complicated matter. Oh, humans are, truly, great learners. We really do have an amazing memory, when you consider all the things we manage to stuff in there, usually without any great effort or particular intention. But that’s the point, isn’t it? It isn’t about how much we remember. It’s about remembering the things we want to remember.

And to do that, we need to know what makes things hard to remember, or easy to remember. We need to know that this is a question about the things themselves, about the context they’re in, about the way you’re experiencing them, and about the way you relate to them. You can see why this is something that can’t simply be written down in a series of bullet points.

But you don’t have to become a cognitive psychologist either! Expertise comes at different levels. My aim, in my books in particular, and on this website, is to explain as much as is helpful, leaving out most of the minutiae of neuroscience and cognitive theory, trying to find the kernel that is useful at a practical level.

It’s past time I put all these bits together, to describe, for example, exactly when a good mood helps cognition, and when it impairs it; when shifting your focus of attention impairs your performance, and when you need to shift focus to revive your performance; when talking helps, and when it doesn’t; when gesturing helps, and when it doesn’t — you see, there are no hard-and-fast rules about anything. Everything is tempered by task, by circumstance, by individual. So, I will be working on that: the manual for advanced users, you might call it. Let me know if this is something you’d be interested in (the more interest, the more time I’ll spend on it!).