Just as a group of French parents and teachers have called for a two-week boycott of homework (despite homework being officially banned in French primary schools), and just after the British government scrapped homework guidelines, a large, long-running British study has come out in support of homework.
The study has followed some 3000 children from preschool through (so far) to age 14 (a subset of around 300 children didn’t attend preschool but were picked up when they started school). The latest report from the Effective Pre-school, Primary and Secondary Education Project (EPPSE), which has a much more complete database to call on than previous studies, has concluded that, for those aged 11-14, time spent on homework was a strong predictor of academic achievement (in three core subjects).
While any time spent on homework was helpful, the strongest effects were seen in those doing homework for 2-3 hours daily. This remained true even after prior self-regulation was taken into account.
Of course, even with such a database as this, it is difficult to disentangle other positive factors that are likely to correlate with homework time — factors such as school policies, teacher expectations, parental expectations. Still, this study gives us a lot of data we can mull over and speculate about.
For example, somewhat depressingly, only just over a quarter of students (28%) said they were sometimes given individualized work. Many weren’t impressed by the time it took some teachers to mark and return their homework: only 68% of girls, and 75% of boys, agreed that ‘Most teachers mark and return my homework promptly’. Nor were they impressed with the standards of the work required: 49% of those whose family had no educational qualifications agreed that ‘teachers are easily satisfied’, compared with 34% of those whose family had school or vocational qualifications, and 30% of those whose family had higher qualifications — suggesting, among other things, that teachers of less privileged students markedly underestimate their students’ abilities. Also depressingly, over a third (36%) agreed that ‘pupils who work hard are given a hard time by others’. Again, the proportions differed markedly by background: 46% of those in the lowest ‘Home Learning Environment’ (HLE) agreed with the statement, decreasing steadily to 27% (still too high!) among those in the highest HLE.
One much-touted benefit of homework, especially among those in the ‘homework for the sake of homework’ camp, is that it teaches self-regulation (although it can be, and has been, equally argued that, by setting useless homework, teachers weaken self-regulation). While the present study did find social-behavioral benefits associated with homework, which would seem to support the former view, these benefits were seen only in relation to behavior at age 14, not in any changes between 11 and 14. In other words, homework wasn’t driving change over time. This would seem to argue against the idea that doing homework teaches children how to manage their own learning.
Another of the report’s many interesting findings concerns children who ‘succeed against the odds’ — that is, who do better than expected considering their socioeconomic or personal circumstances. Parents of these children tend to engage in ‘active cultivation’ — reading and talking to them when young, providing them with many and wide-ranging learning experiences throughout their childhood, supporting and encouraging their learning. Such support tended to be lacking for those children who did not transcend their circumstances, whose parents often felt helpless about parenting and about education.
In view of my last blog post, I would also like to particularly note that ‘good’ students tended to have a strong internal locus of control, while ‘poor’ students tended to feel helpless, and believed that the ability to learn was an inborn talent (one they didn’t possess).
But education providers shouldn’t simply blame the parents! Teachers, too, are important, and those students who succeeded against the odds also attributed part of their success to supportive and empowering teachers, while those disadvantaged students who didn’t succeed mentioned the high number of supply teachers and disorganized lessons.
There is also a role for peers, and for extracurricular activities — families with academically successful children tended to value extracurricular activities, while those with less successful students viewed them, dismissively, as ‘fun’, rather than of any educational value.
You can download the full report at https://www.education.gov.uk/publications/standard/publicationDetail/Page1/DFE-RR202 or see the summary at http://www.ioe.ac.uk/newsEvents/62517.html
There’s a lot of controversy about the value of homework, for understandable reasons. The inconsistent findings of homework research point to the fact that we can’t say, simplistically, that all children of [whatever age] should do [so many] hours of homework. It all depends on the quality and context of the homework, and on its interaction with the individual student. Homework may be an effective strategy, but it is one that is all too often carried out ineffectively.
Homework for the sake of homework is always a bad idea, and if the teacher can’t articulate what the purpose of the homework is (or that purpose isn’t a good one!), then they shouldn’t set it.
So what are good purposes for homework?
The most obvious is to perform tasks that can’t, for reasons of time or resources, be accomplished in the classroom. But this, of course, is less straightforward than it appears. Practice, for example, would seem to be a clear contender, but optimally distributed retrieval practice (i.e., testing — see also this news report and this) is usually best done in the classroom. Projects generally require time and resources beyond the classroom, but parts of the project may well require school resources or group activity or teacher feedback.
Maybe we should turn this question around: what are classrooms good for?
Contrary to popular practice, the simple regurgitation of information, from teacher to student, is not what classrooms are best used for. Such information is more efficiently absorbed from texts or videos or podcasts — which students can read/watch/listen to as often as they need to. No, there are five main activities for which classrooms are best suited:
- Group activities (including class discussion)
- Activities involving school resources (such as science experiments — I am using ‘classroom’ broadly)
- Praxis (as seen in the apprenticeship model — a skill or activity is modeled by a skilled practitioner for students to imitate; the practitioner provides feedback)
- Motivation (the teacher engages and enthuses the students; teacher and peer feedback provides on-going help to stay on-task)
- Testing (not to put students under pressure to perform on tests that will decide their future, but because retrieval practice is the best strategy for learning there is — that is, testing needs to be done in a completely different way, and with students and teachers understanding that these tests are for the purposes of learning, not as a judgment on ability)
All of this is why the flipped classroom model is becoming so popular. I’m a great fan of this, although of course it needs to be done well. Here are some links for those who want to learn more about this:
An article on flipped classrooms, what they are and some teachers’ and students’ experiences. http://www.azcentral.com/news/articles/2012/03/31/20120331arizona-school-online-flipping.html
A case study of ‘flipped classroom’ use at Byron High School, where math mastery has jumped from 30% in 2006 to 74% in 2011 according to the Minnesota Comprehensive Assessments. http://thejournal.com/articles/2012/04/11/the-flipped-classroom.aspx
A brief interview with high school chemistry teacher Jonathan Bergmann, who now helps other teachers ‘flip’ their classrooms, and is co-author of a forthcoming book on the subject. http://www.washingtonpost.com/local/education/the-flip-classwork-at-home-homework-in-class/2012/04/15/gIQA1AajJT_story.html
But there's one reason for all the argument on the homework issue that doesn't get a lot of airtime: there is no clear consensus on what school is for and what students should be getting out of it. And maybe part of the reason for that is that some people (some teachers, some education providers and officials) don’t want to articulate what they believe school is all about, because they know many people would be outraged by their opinions. But if you think some people are going to be appalled, maybe you should rethink your views!
Now of course different individuals are going to want different things from education, but until all parties can front up and lay out clearly exactly what they think school is for, then we’re not going to be able to construct a system and a curriculum that teaches effectively and reliably across the board.
Which is not to say I think we'd all agree. But if people openly and honestly put their agenda on the table, then we could openly state what particular schools are for, and different guidelines and assessment tools could be used appropriately.
But first and most important: everyone (students, teachers, and parents) needs to realize that, notwithstanding the role of genes, intelligence and learning ‘talents’ are far from fixed. (I’ve talked about this on a number of occasions, but if you want to read more about this, and the importance of self-regulation, from another source, check out this blog post at Scientific American.) If a child is not learning, it is a failure of a number of aspects of their situation, but it is not (absent severe brain damage) because the child is too stupid or lazy. (On which subject, you might like to read a great article in the Guardian about 'Poor economics'.)
What I think about homework is that we should get away completely from this homework/classwork divide. What we need to do is decide what work the student needs to do (to fulfil the articulated purpose), and then divide that into work that is most effectively done in the classroom (given the student's circumstances) and work that is best done in the student's own time and at their own pace.
So what do you think?
Children learn. It’s what they do. And they build themselves over the years from wide-eyed baby to a person who walks and talks and can maybe fix your computer, so it’s no wonder that we have this idea that learning comes so much more easily to them than it does to us. But is it true?
There are two particular areas where children are said to excel: learning language, and learning skills.
Years ago I reported on a 2003 study that challenged the widespread view that young children learn language more easily than anyone older, in regard to vocabulary. Now a new study suggests that the idea doesn’t apply to grammar-learning either.
In the study, 24 Israeli students aged 8, 12, or 21, were given ten daily lessons in a made-up language. A rule in the language — not made explicit to the students — was that verbs were spelled and pronounced differently depending on whether they referred to an animate or inanimate object. In the lessons, the students were asked to listen to a list of correct noun-verb pairs, and then say the correct verb when given further nouns. Two months later, the students were tested on what they remembered.
The young adults were significantly faster at learning and more accurate than the other groups. Moreover, the 8-year-olds never succeeded in transferring the rule to new examples (even when they were given additional training, with the rule made more obvious), while most 12-year-olds and adults scored over 90%, with the adults doing best. It’s also noteworthy (given popular belief) that children's pronunciation was inferior to that of older subjects.
The findings point to the importance of explicit learning, as well as indicating that language skills are not reduced post-puberty, as has been suggested. So why does it seem more difficult for most adults to learn a new language? The problem may lie with interference from the native (or indeed any other) language.
I’ll get back to that. Let’s move on to the related question of procedural memory, or skill learning.
Here’s a study in which we learn something truly fascinating about interference. In the study, 74 young people (aged 9, 12, and 17) were trained on a finger-tapping task, then tested on the two following days. Some of the participants were further tested six weeks later. In a second experiment, 54 similarly-aged people had the same training, but were also given an additional training session two hours later, during which the motor sequence to be learned was the reverse of that practiced in the initial session. They were then tested, 24 hours later, on the first sequence.
In the first experiment, all age-groups improved steadily during training, in both speed and accuracy, and showed jumps in performance when tested 24 hours later (such jumps are typical in procedural learning and are referred to as ‘off-line gains’; they are assumed to reflect memory consolidation).
These jumps were maintained or improved at 48 hours, and six weeks. The gains were the same for each age-group, but there was a clear difference between the groups in terms of their starting point, with the older ones performing noticeably better initially. Because the effect of practice was the same for all, the performance difference between each age-group was the same at each point in time.
It is worth emphasizing that performance six weeks after training was just as good, and sometimes better, despite the lack of practice over that time.
So these results challenge the view that children have an advantage over adults in terms of learning skills, and also demonstrate that children improve “off-line” as adults do, indicating that they too have an effective consolidation phase in motor memory.
But the second experiment is the really interesting bit. You would expect, if you learned one sequence and then learned the reverse, that this would interfere badly with your memory for the first sequence. And so it did, for the 17-year-olds. But not for the 9- and 12-year-olds, who both showed a performance gain at 24 hours, as seen in the first experiment.
Moreover, the better the 17-year-olds became at the reverse sequence, the worse their performance on the initial sequence at the 24-hour test (as you’d expect) — but for the 12-year-olds, the better they were on the reverse sequence, the better they did on the first sequence at the 24-hour test.
What does this mean? Why didn’t interference occur in the pre-pubertal children?
It appears that the consolidation occurring in children is different in some way from that occurring in adults.
There are several possibilities. It may be that the consolidation process becomes, post-puberty, more selective. In the situation where there are several different experiences, priority is given to the more recent. It may also be that consolidation simply occurs faster in children.
One mechanism of change may occur through sleep. The structure of sleep changes during puberty, and we don’t yet know whether consolidation occurs during sleep in children as it does in adults. Another is competition for neural resources (transcription and protein synthesis related factors) during consolidation. It has been suggested that this “competitive maintenance” only fully matures at puberty.
On the other hand, it may have to do with the effects of experience. Interference only occurs when tasks overlap at some point. If children are representing the movement sequences in a more specific, less abstract, way than adults, the sequences may be less likely to use the same neurons (e.g. adults are learning a rule; children are learning two different ways of moving particular fingers). Accordingly, training on the reverse sequence provides additional training in the art of moving these fingers in this way, but doesn’t interfere because the pattern is not the same.
Interference is the bugbear of learning, and it may be the key to why learning gets harder the older we get — despite the advantages that age and experience bring. So let’s explore this a little more.
Here’s a small study in which 14 young adults (average age 20) and 12 older adults (average age 58; range 55-70) learned a motor sequence task requiring them to press the appropriate button when they saw a blue dot appear in one of four positions on the screen. The training included several learnable sequences interspersed with random trials. Participants, however, were not informed of this. There were three blocks of trials in the first session (separated by 1-2 minute rests), and a fourth block in a second session, 24 hours later.
As expected, younger adults were notably faster in their responses than the older group. Less expected was the fact that the older group showed markedly greater improvement on the learnable sequences than the younger group. However, on the second session, while the younger adults showed the expected off-line gain in performance, indicative of consolidation, the older adults performed at the same level as they had early in the first session.
It should be noted that the average reaction time of the older group in the final session matched the reaction time of the younger group in the first session, demonstrating that, while we may slow down with age, we can counter that with training. The fact that the older adults were noticeably better at learning the sequences may reflect the increases in activation seen in motor regions in normal aging, possibly compensating for decreased activation and atrophy in the hippocampus.
But what’s interesting in this context is this lack of off-line gain.
The same thing was seen in another study comparing younger and older adults, which found that, while the older adults showed improvement in general skill on an implicit sequence-learning task after 12 hours, this improvement had disappeared at 24 hours. Nor was it seen at one week.
So why aren’t these memories being consolidated in the older adults?
(This is not to say that all benefit of the earlier training was lost — the improvement over the second session indicates that some memory was retained. So it may be — and is consistent with what we know about the effects of training in older adults — that more, and perhaps longer, training sessions are needed before older adults can properly consolidate new learning.)
Is this because we become slower to consolidate with age? This harks back to the idea that children suffer less interference because they can consolidate memories more swiftly.
Or perhaps it has to do with the greater interference that comes with the more richly-connected brains of older adults. A computational model mimicked the decline in language learning as a function of growth in connectivity in a neural network, suggesting that as connectivity in the parts of the brain responsible for procedural memory increases, learning suffers increasingly from first-language interference.
It may be, of course, that both processes are going on: greater interference, and slower consolidation.
It may also be that the adult brain becomes more selective in the making of long-term skill memory.
It may also be that these (and other) changes in the adult brain lead to more interaction between information-sets that are further apart (see my recent news item on preventing interference). Thus, if you learn something at ten in the morning, and something else at twelve, your brain can, and will, try to relate the two (which can be good or bad). A child’s brain can’t stretch to encompass that. They would need to be explicitly reminded of the first lesson.
I suspect that all these factors are important, and point to ways in which we should approach learning/teaching differently for pre-pubertal children, young adults, and older adults.
In the case of older adults, it is clear that we need to provide the optimal conditions for consolidation.
I have talked repeatedly about the value of spaced training, distributed training, interleaved training. So it’s interesting to note that studies have found that consolidation of motor memories occurs differently depending on whether training occurs in blocks (each sequence mastered before learning another one) or on a random schedule involving all sequences.
Off-line learning is better when motor skills are learned under a random practice schedule. While blocked practice produces better immediate learning, random practice produces better delayed learning. It appears that a random schedule generates activity across a broad network involving premotor, parietal, sensorimotor and subcortical regions, while learning under the blocked schedule is limited to a more confined area (specifically one particular part of the motor cortex).
This suggests that interleaved practice is even more important for older adults. Although it slows down initial learning (which, remember, was better for older adults compared to younger, so there’s leeway there!), spreading the load across a broader neural network is especially important for those who have some atrophy or impairment in specific regions (as often occurs with age).
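The blocked-versus-random distinction is easy to make concrete. Here is a minimal sketch in Python (the sequence names and function names are mine, purely illustrative, not from any of the studies) of how the two kinds of practice schedule are built from the same set of trials:

```python
import random

def blocked_schedule(sequences, reps):
    """Blocked practice: each sequence is repeated to completion
    before the next one begins."""
    return [seq for seq in sequences for _ in range(reps)]

def interleaved_schedule(sequences, reps, seed=None):
    """Random (interleaved) practice: the same trials, shuffled so
    that no sequence is practised in isolation."""
    trials = [seq for seq in sequences for _ in range(reps)]
    random.Random(seed).shuffle(trials)
    return trials

sequences = ["A", "B", "C"]  # hypothetical motor sequences
print(blocked_schedule(sequences, 3))
# → ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C']
print(interleaved_schedule(sequences, 3, seed=1))  # same trials, shuffled
```

The total amount of practice is identical under both schedules; only the ordering differs, which is exactly the manipulation in the studies described above — blocked practice wins in the short term, interleaved practice wins at delayed test.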
Judicious resting during learning may also be of greater benefit for older adults. Consolidation occurs most famously during sleep (and let’s not forget how sleep changes in old age), and also occurs, to a lesser extent, while awake, within a few hours of training. But there is also evidence that a boost in skill learning can occur after rests lasting only a few minutes (or even seconds). This phenomenon (called ‘reminiscence’) is distinguished from consolidation because the gains in performance don’t usually endure. However, while in some circumstances it may simply reflect recovery from mental or physical fatigue, in others it may have a more lasting effect.
Evidence for this has come from learning in music. A particularly interesting study involved non-musicians learning a five-key sequence on a digital piano. It found that even 5-minute rests during learning could be beneficial, but only if they occurred at the right time.
In the study, the participants repeated the sequence as fast and as accurately as they could during twelve 30-second blocks interspersed with 30-second pauses. A third of the participants had a 5-minute rest between the third and fourth blocks, another third had the rest between the ninth and tenth blocks, and the remaining third had no rest at all. Everyone was re-tested the next day, around 12 hours after training.
Participants showed large improvements during training after either 5-minute rest. However, only those given a rest early in the training continued to improve throughout it. That is, although the late-rest group’s performance jumped at block 10 after their rest, it fell again on blocks 11 and 12, while the performance of the early-rest group continued to climb after their own jump (at block 4). The early-rest group also showed the greatest off-line gain. That is, their performance ‘jumped’ more than that of the other two groups when tested on the following day.
In other words, consolidation was affected by the timing of the rest.
Among the late-rest and no-rest groups, improvement during blocks 4-9 was not as rapid as it had been during the first three blocks. This is a typical pattern during motor learning. It may be, then, that resting early allows processes triggered by repetition to develop fully, rather than becoming attenuated through too much repetition. Thus resting early in practice may allow the faster rate of learning to continue for longer. This in turn results in greater repetition before practice ends, leading to a more stabilized (short-term consolidated) memory, and thus greater overnight (long-term) consolidation.
On the other hand, the short-lasting gain achieved by the late-rest group didn’t affect later learning, but did predict the extent to which performance improved after sleep.
Other improvements to learning may come from reducing interference, and from taking account of the adult brain’s greater selectivity. In the realm of language learning, for example, it’s argued that successful long-term learning in adults is increasingly dependent on explicit learning, declarative knowledge, and its automatization. It may be that, for adults learning a second language, greater importance should be placed on explicit comparison with the native language.
It also seems likely that immersion in the new language is more important for adult learners. The problem is that every time you return to your native language, you’re encouraging interference (something to which, as we have seen, children may be far less susceptible).
In sum, as we get older, interference becomes more of an issue. To counter this, we need to be more thoughtful about planning our learning.
For more about the recently reported research into the difference between children's and adults' language learning, see
Brown, R. M., & Robertson, E. M. (2007). Off-Line Processing: Reciprocal Interactions between Declarative and Procedural Memories. The Journal of Neuroscience, 27(39), 10468–10475.
Brown, R. M., Robertson, E. M., & Press, D. Z. (2009). Sequence Skill Acquisition and Off-Line Learning in Normal Aging. PLoS ONE, 4(8), e6683.
Cash, C. D. (2009). Effects of Early and Late Rest Intervals on Performance and Overnight Consolidation of a Keyboard Sequence. Journal of Research in Music Education, 57(3), 252–266.
DeKeyser, R., Monner, D., Hwang, S-O., Morini, G., & Vatz, K. (2011). Qualitative differences in second language memory as a function of late learning. Presented at the International Congress for the Study of Child Language, Montreal, Canada.
Dorfberger, S., Adi-Japha, E., & Karni, A. (2007). Reduced Susceptibility to Interference in the Consolidation of Motor Memory before Adolescence. PLoS ONE, 2(2), e240.
Ferman, S., & Karni, A. (2010). No Childhood Advantage in the Acquisition of Skill in Using an Artificial Language Rule. PLoS ONE, 5(10), e13648.
Ferman, S., & Karni, A. (2011). Adults outperform children in acquiring a language skill: Evidence from learning an artificial morphological rule in different conditions. Presented at the International Congress for the Study of Child Language, Montreal, Canada.
Karni, A. (2011). A critical look at ‘critical periods’ in skill acquisition: from motor sequences to language skills. Presented at the International Congress for the Study of Child Language, Montreal, Canada.
Nemeth, D., & Janacsek, K. (2010). The Dynamics of Implicit Skill Consolidation in Young and Elderly Adults. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 66B, 15–22.
Robertson, E. M., Press, D. Z., & Pascual-Leone, A. (2005). Off-Line Learning and the Primary Motor Cortex. The Journal of Neuroscience, 25(27), 6372–6378.
Stambaugh, L. A. (2011). When Repetition Isn’t the Best Practice Strategy: Effects of Blocked and Random Practice Schedules. Journal of Research in Music Education, 58(4), 368–383.
Steele, C. J., & Penhune, V. B. (2010). Specific Increases within Global Decreases: A Functional Magnetic Resonance Imaging Investigation of Five Days of Motor Sequence Learning. The Journal of Neuroscience, 30(24), 8332–8341.
Wymbs, N. F., & Grafton, S. T. (2009). Neural Substrates of Practice Structure That Support Future Off-Line Learning. Journal of Neurophysiology, 102(4), 2462–2476.
In October I reported on a study that found older adults did better than younger adults on a decision-making task that reflected real-world situations more closely than most tasks used in such studies. It was concluded that, while (as previous research has shown) younger adults may do better on simple decision-making tasks, older adults have the edge when it comes to more complex scenarios. Unsurprisingly, this is where experience tells.
Last year I reported on another study, showing that poorer decisions by older adults reflected specific attributes, rather than age per se. Specifically, processing speed and memory are behind individual differences in decision-making performance. Both of these processes, of course, often get worse with age.
What these two studies suggest is that your ability to make good decisions depends a lot on whether
- you have sufficient time to process the information you need,
- your working memory is up to the job of processing all the necessary information, and
- your long-term memory is able to provide any information you need from your own experience.
One particular problem for older adults, for example, that I have discussed on many occasions, is source memory — knowing the context in which you acquired the information. This can have serious consequences for decision-making, when something or someone is remembered positively when it should not be, because the original negative context has been forgotten.
But the trick to dealing with memory problems is to find compensation strategies that play to your strengths. One thing that improves with age is emotion regulation. As we get older, most of us get better at controlling our emotions, and using them in ways that make us happier. Moreover, it appears that working memory for emotional information (in contrast to other types of information) is unaffected by age. Given new research suggesting that decision-making is not simply a product of analytic reasoning processes, but also involves an affective/experiential process that may operate in parallel and be of equal importance, the question arises: would older adults be better relying on emotion (their ‘gut’) for decisions?
In Scientific American I ran across a study looking into this question. 60 younger (aged 18-30) and 60 older adults (65-85) were presented with health care choices that required them to hold in mind and consider multiple pieces of information. The choices were among pairs of health-care plans, physicians, treatments, and homecare aides. Working memory load increased across trials from one to four attributes per option. On each trial, one option had a higher proportion of positive to negative attributes. Each attribute had a positive and negative variant (e.g., “dental care is fully covered” vs “dental care is not covered”).
In the emotion-focus condition participants were asked to focus on their emotional reactions to the options and report their feelings about the options before making a choice. In the information-focus condition, participants were told to focus instead on the specific attributes and report the details about the options. There were no such instructions in the control condition.
As expected, working memory load had a significant effect on performance, but what’s interesting is the different effects in the various conditions. In the control condition, for both age groups, there was a dramatic decrease in performance when the cognitive load increased from 2 items to 4, but no difference between those in which the load was 4, 6, or 8 items. In the information-focus condition, the younger group showed a linear (but not steep) decrease in decision-making performance with each increase in load, except at the last — there was no difference between 6 and 8 items. The older group showed a dramatic drop when load was increased from 2 to 4, no difference between 4 and 6, and a slight drop when items increased to 8. In the emotion-focus condition, both groups showed the same pattern they had shown in the information-focus condition, except that, for the younger group, there was a dramatic drop when items increased to 8.
So that’s one point: that the effect of cognitive load is modified by instructional condition, and varies by age.
The other point, of course, concerns how level of performance varies. Interestingly, in the control condition, the two age groups performed at a similar level. In the information-focus condition, the slight superiority of the younger group when the load was lightest expanded significantly as soon as the number of items increased to four, and was greatest at the highest load. In the emotion-focus condition, however, the very slight superiority of the younger group at two items did not increase as the load increased, and indeed reversed when the load increased to eight.
Here’s what I think are the most interesting results of this study:
There was no significant difference in performance between the age groups when no instruction was given.
Younger adults were better off being given some instruction, but when the cognitive load was not too great (2, 4, 6 items), there was no difference for them in focusing on emotions or details. The difference — and it was a significant one — came when the load was highest. At this point, they did much better when they concentrated on the details and applied their reasoning abilities.
Older adults, on the other hand, were always better off focusing on their feelings, especially when the load was highest.
Performance on a digit-symbol coding task (a measure of processing speed) correlated significantly with performance in the information-focus condition for both age groups. When processing speed was taken into account, the difference between the age groups in that condition disappeared. In other words, younger adults' superior performance in the information-focus condition was entirely due to their higher processing speed. However, age differences in the emotion-focus condition were unaffected.
Younger adults performed significantly better in the information-focus condition compared to the control condition, indicating that specific instructions are helpful. However, there was no significant difference between the emotion-focus condition and the control for the older adults, suggesting perhaps that such processing is their ‘default’ approach.
The findings add weight to the idea that there is a separate working memory system for emotion-based information.
It should be noted that, somewhat unusually, the information was presented to participants sequentially rather than simultaneously. It may well be that these results do not apply to the situation in which you have all the necessary information presented to you in a document and can consider it at your leisure. On the other hand, in the real world we often amass information over time, or acquire it by listening rather than seeing it all nicely arrayed in front of us.
The findings suggest that the current emphasis on providing patients with all available information in order to make an “informed choice” may be misplaced. Many older patients may be better served by a greater emphasis on emotional information, rather than being encouraged to focus on myriad details.
But I'd like to see this experiment replicated using a simultaneous presentation. It may be that these findings should principally be taken as support for always seeking written documentation to back up spoken advice, or, if you're gathering information over time and from multiple sources, making sure you have written notes for each instance. Personally, I dislike making any decisions based solely on information given in conversation, and this is a reluctance I have found increasing steadily with age (and I'm not that old yet!).
Mikels, J.A., Löckenhoff, C.E., Maglio, S.J., Carstensen, L.L., Goldstein, M.K. & Garber, A. (2010). Following your heart or your head: Focusing on emotions versus information differentially influences the decisions of younger and older adults. Journal of Experimental Psychology: Applied, 16(1), 87-95.
I have previously reported on how gait and balance problems have been associated with white matter lesions, and walking speed and grip strength have been associated with dementia and stroke risk. Another recent study, involving 93 older adults (70+), has added to this evidence, with the finding that those with non-amnestic MCI were much more likely to be slow walkers.
The study involved 54 seniors with no cognitive impairment, 31 with non-amnestic MCI and eight with amnestic MCI. Passive infrared sensors fixed in series on the ceilings of participants’ homes enabled their walking speed to be monitored unobtrusively over a three-year period.
Those with non-amnestic MCI were nine times more likely to be slow walkers than moderate or fast walkers, and more likely to show greater variability in walking speed.
Unfortunately, I have not been able to read the full paper (which is why I’m not reporting this in news), so I can’t tell you any more details. I assume that the main reason for the failure to find a significant difference in the amnestic MCI group was that the group was so small, but I don’t know.
Nevertheless, the study does add to the growing evidence of an association between gait and balance problems and risk of cognitive impairment and dementia, which is why I was interested to read a recent paper on entraining walking using a metronomic beat.
The paper spoke about the use of sensory cues in neurological rehabilitation. Specifically, auditory cues have been shown to help various gait characteristics of patients with Parkinson's disease and stroke. In patients with Parkinson’s, visual cues also improved stride length, while auditory cues improved cadence.
So here’s the question: if you are having gait and/or balance problems, will improving them also reduce your risk of developing cognitive problems? Or are the physical problems merely a consequence of physical deterioration in the brain that also leads to cognitive problems?
I’ve raised the same question before in relation to sensory deterioration. My answer then is the same answer I give now: you shouldn’t ignore these physical problems as something that is simply inevitable with age and/or poor health. As with sensory impairment, there are two ways in which restricted physical movement might impact your cognition.
One is the physical damage in the brain I have spoken of. Whether or not you can reverse some of this damage (or at least counteract it by developing some other area of the brain) by improving gait, balance, or grip strength, is a question as yet unanswered. But it is possible, and for that reason should be tried.
The other way is through the effect of restricted physical movement on your activities, and your state of mind. Research suggests that restricting your environment is a risk factor in developing cognitive impairment. Similarly, social engagement and cognitively-stimulating activities are both important for preventing cognitive decline, and while physical frailty doesn’t necessarily limit these, it does make it much more likely that they will be restricted.
State of mind is associated with attitude, and I have spoken before (often!) about the effect of this on cognition. If you believe that life is ‘over’ for you, that you are sliding rapidly down the hill and there is nothing you can do about it, then your belief will make that true. Physical frailty is, understandably, going to make that belief more likely. Contrariwise, if you succeed in reducing your frailty, in being able once again to do some tasks that you thought you would never be able to do again, then you are much more likely to take action in fighting cognitive decline.
So, it’s worth tackling walking problems — and worth making your best efforts to ensure that they don’t happen, by keeping fit and active. The use of sensory cues to help gait problems probably requires some specialist assistance. Another approach is to practice tai chi, which is generally recommended as an activity for improving balance.
When we are presented with new information, we try and connect it to information we already hold. This is automatic. Sometimes the information fits in easily; other times the fit is more difficult — perhaps because some of our old information is wrong, or perhaps because we lack some of the knowledge we need to fit them together.
When we're confronted by contradictory information, our first reaction is usually surprise. But if the surprise continues, with the contradictions perhaps increasing, or at any rate becoming no closer to being resolved, then our emotional reaction turns to confusion.
Confusion is very common in the learning process, despite most educators thinking that effective teaching is all about minimizing, if not eliminating, confusion.
But recent research has suggested that confusion is not necessarily a bad thing. Indeed, in some circumstances, it may be desirable.
I see this as an example of the broader notion of ‘desirable difficulty’, which is the subject of my current post. But let’s look first at this recent study on confusion for learning.
In the study, students engaged in ‘trialogues’ involving themselves and two animated agents. The trialogues discussed possible flaws in a scientific study, and the animated agents took the roles of a tutor and a student peer. To get the student thinking about what makes a good scientific study, the agents disagreed with each other on certain points, and the student had to decide who was right. On some occasions, the agents made incorrect or contradictory statements about the study.
In the first experiment, involving 64 students, there were four opportunities for contradictions during the discussion of each research study. Because the overall levels of student confusion were quite low, a second experiment, involving 76 students, used a delayed manipulation, where the animated agents initially agreed with each other but eventually started to express divergent views. In this condition, students were sometimes then given a text to read to help them resolve their confusion. It was thought that, given their confusion, students would read the text with particular attention, and so improve their learning.
In both experiments, students who were genuinely confused by the contradiction between the two agents did significantly better on the test at the end.
A side-note: self-reports of confusion were not very sensitive, and students’ responses to forced-choice questions following the contradictions were more sensitive at inferring confusion. This is a reminder that students are not necessarily good judges of their own confusion!
The idea behind all this is that, when there’s a mismatch between new information and prior knowledge, we have to explore the contradictions more deeply — make an effort to explain the contradictions. Such deeper processing should result in more durable and accessible memory codes.
Such a mismatch can occur in many, quite diverse contexts — not simply in the study situation. For example, unexpected feedback, anomalous events, obstacles to goals, or interruptions of familiar action sequences, all create some sort of mismatch between incoming information and prior knowledge.
However, all instances of confusion aren’t necessarily useful for learning and memory. They need to be relevant to the activity, and of course the individual needs to have the means to resolve the confusion.
As I said, I see a relationship between this idea of the right level and type of confusion enhancing learning, and the idea of desirable difficulty. I’ve talked before about the ‘desirable difficulty’ effect (see, for example, Using 'hard to read' fonts may help you remember more). Both of these ideas, of course, connect to a much older and more fundamental idea: that of levels of processing. The idea that we can process information at varying levels, and that deeper levels of processing improve memory and learning, dates back to a paper written in 1972 by Craik and Lockhart (although it has been developed and modified over the years), and underpins (usually implicitly) much educational thinking.
But it’s not so much this fundamental notion — that deeper processing helps memory and learning, and that certain desirable difficulties encourage deeper processing — that interests me, as the idea of getting the level right.
Too much confusion is usually counter-productive; so is too much difficulty.
Getting the difficulty level right is something I have talked about in connection with flow. On the face of it, confusion would seem to be counterproductive for achieving flow, and yet ... it rather depends on the level of confusion, don't you think? If the student has clear paths to follow to resolve the confusion, the information flow doesn't need to stop.
This idea also, perhaps, has connections to effective practice principles — specifically, what I call the ‘Just-in-time rule’. This is the principle that the optimal spacing for your retrieval practice depends on you retrieving the information just before you would have forgotten it. (That’s not as occult as it sounds! But I’m not here to discuss that today.)
It seems to me that another way of thinking about this is that you want to find that moment when retrieval of that information is at the ‘right’ level of difficulty — neither too easy, nor too hard.
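If you like a concrete picture, the just-in-time idea can be sketched as a simple expanding-interval scheduler. This is purely illustrative: the starting interval, the doubling on success, and the halving on failure are my own assumptions, not parameters from any study.

```python
def next_review(interval_days: float, succeeded: bool,
                growth: float = 2.0, shrink: float = 0.5) -> float:
    """Return the next review interval in days.

    A successful retrieval suggests the item had not yet reached the
    point of forgetting, so the gap expands; a failure suggests we
    waited too long, so the gap contracts (never below one day).
    """
    if succeeded:
        return interval_days * growth
    return max(1.0, interval_days * shrink)

# Example: schedule a sequence of reviews for one item,
# starting with a one-day gap.
interval = 1.0
for outcome in [True, True, True, False, True]:  # retrieval outcomes
    interval = next_review(interval, outcome)
# intervals seen: 1 -> 2 -> 4 -> 8 -> 4 -> 8
```

The point of the sketch is only that the schedule hunts for the moment just before forgetting: each success pushes the next retrieval later (harder), each failure pulls it earlier (easier).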
Successful teaching is about shaping the information flow so that the student experiences it — moment by moment — at the right level of difficulty. This is, of course, impossible in a factory-model classroom, but the mechanics of tailoring the information flow to the individual are now made possible by technology.
But technology isn't the answer on its own. To achieve optimal results, it helps if the individual student is aware that the success of their learning depends on managing the information flow (some will succeed regardless of inadequate instruction, but for most it will at least be more effective). Which means they need to provide honest feedback, they need to be able to monitor their learning and recognize when they have ‘got’ something and when they haven’t, and they need to understand that if one approach to a subject isn’t working for them, they need to try a different one.
Perhaps this provides a different perspective for some of you. I'd love to hear of any thoughts or experiences teachers and students have had that bear on these issues.
One of the points I mention in my book on notetaking is that the very act of taking notes helps us remember — it’s not simply about providing yourself with a record. There are a number of reasons for this, but a recent study bears on one of them. The researchers were interested in whether physically writing by hand has a different effect than typing on a keyboard.
In a fascinating experiment, adults were asked to learn to write in an unknown alphabet, with around twenty letters. One group was taught to write by hand, while another group used a keyboard. Participants were tested on their fluency and recall after three and six weeks. Those who had learned the letters by handwriting were significantly better on all tests. Moreover, Broca's area, a brain region involved in language, was active when this group were recognizing the letters, but not among those who had learned by typing on a keyboard.
The findings point to the importance of sensorimotor processes in activities we have typically regarded as primarily intellectual.
I recently reported on another finding concerning handwriting — that the memory-blocking effect of exam anxiety could be overcome by the simple strategy of writing out your anxieties just before the exam. It’s also interesting in this context to remember the research into the benefits of gesturing for reducing the load on your working memory, with consequent assistance for memory, learning and comprehension. The writing effect on exam anxiety is also thought to be related to reducing the load on working memory.
In the case of this latest study, it seems likely that the benefits have more to do with the increased focus on the shape of the letters that occurs when writing by hand, and with the intimate connection between reading and writing.
But the message of these different studies is the same: that we ignore the physical at our peril; that cognition is “embodied cognition”, rooted in our bodies in ways we are only beginning to understand.
Mangen, A. & Velay, J. (2010). Digitizing literacy: Reflections on the haptics of writing. In M.H. Zadeh (Ed.), Advances in Haptics. InTech. Press release at https://www.eurekalert.org/pub_releases/2011-01/uos-blt011911.php
The limitations of working memory have implications for all of us. The challenges that come from having a low working memory capacity are relevant not only to particular individuals, but to almost all of us at some points in our lives, because working memory capacity has a natural cycle: in childhood it grows with age, and in old age it begins to shrink. So the problems that come with a low working memory capacity, and strategies for dealing with them, are ones that all of us need to be aware of.
Today, I want to talk a little about the effect of low working memory capacity on reading comprehension.
A recent study involving 400 University of Alberta students found that 5% of them had reading comprehension difficulties. Now the interesting thing about this is that these were not conventionally poor readers. They could read perfectly well. Their problem lay in making sense of what they were reading: not because they didn’t understand the words or the meaning of the text, but because they had trouble remembering what they had read earlier.
Now these were good students — they had at least managed to get through high school sufficiently well to go to university — and many of them had developed useful strategies for helping them with this task: highlighting, making annotations in the margins of the text, and so on. But it was still very difficult for them to get hold of the big picture — seeing and understanding the text as a whole.
This is more precisely demonstrated in a very recent study that required 62 undergraduates to read a website on the taxonomy of plants. Now this represents a situation that is much more like a real-world study scenario, and one that has, as far as I know, been little studied: namely, drawing together information from multiple documents.
In this experiment, the multiple documents were represented by 24 web pages. Each page discussed a different part of the plant taxonomy. The website as a whole was organized according to a four-level hierarchical tree structure, where the highest level covered the broadest classes of plants (“Plants”), and the lowest, individual species. However — and this is the important point — there was no explicit mention of this organization, and you could navigate only one link up or down the tree, not sideways. Participants entered the site at the top level.
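To make the navigation constraint concrete, here is a minimal sketch of such a site as a tree in which each page links only to its parent and its children. It is deliberately pared down (far fewer pages and levels than the actual 24-page, four-level site), and every name below “Plants” is an invented placeholder, not a page from the study.

```python
# Each key is a page; its value lists the child pages it links to.
# There are no sideways links: from any page you can move only one
# step up (to the parent) or one step down (to a child).
tree = {
    "Plants": ["Flowering", "Non-flowering"],
    "Flowering": ["Grasses", "Orchids"],
    "Non-flowering": ["Ferns", "Mosses"],
    "Grasses": [], "Orchids": [], "Ferns": [], "Mosses": [],
}

def neighbours(page: str) -> list[str]:
    """Pages reachable in one click: parent (if any) plus children."""
    parent = [p for p, kids in tree.items() if page in kids]
    return parent + tree[page]

print(neighbours("Flowering"))  # ['Plants', 'Grasses', 'Orchids']
```

Nothing about the overall hierarchy is stated anywhere in such a structure; a reader has to infer it from many single-step moves, which is exactly the global-structure inference the tree construction task later tested.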
After pretesting, to assess WMC and prior plant knowledge, the students were given 18 search questions. Participants were asked both to read the site and answer the questions. They were given 25 minutes to do so, after which they completed a post-test similar to their pre-test of prior knowledge: (1) placing the eight terms found in the first three levels on the hierarchical tree (tree construction task); (2) selecting the correct two items from a list of five that were subordinates to a given item (matching task).
Neither WMC nor prior knowledge affected performance on the search task. Neither WMC nor prior knowledge (nor indeed performance on the search task) directly affected performance on the post-test matching task, indicating that learning simple factual knowledge is not affected by your working memory capacity or how much relevant knowledge you have (remember though, that this was a very simple and limited amount of new knowledge).
But, WMC did significantly affect understanding of the hierarchical structure (assessed by the tree construction task). Prior knowledge did not.
These findings don’t only tell us about the importance of WMC for seeing the big picture; they also provide some evidence of what underlies that, or at least what doesn’t. The finding that WMC didn’t affect the other tasks argues against the idea that high-WMC individuals benefit from a faster reading speed, are better at making local connections, or cope better with doing multiple tasks. WMC didn’t affect performance on the search questions, and it didn’t affect performance on the matching task, which tested understanding of local connections. No, the only benefit of a high WMC was in seeing global connections that had not been made explicit.
Let’s go back to the first study for a moment. Many of the students having difficulties apparently did use strategies to help them deal with their problem, but their strategy use obviously wasn’t enough. I suspect part of the problem here, is that they didn’t really realize what their problem was (and you can’t employ the best strategies if you don’t properly understand the situation you’re dealing with!).
This isn’t just an issue for people who lack the cognitive knowledge and the self-knowledge (“metacognition”) to understand their intrinsic problem. It’s also an issue for adults whose working memory capacity has been reduced, either through age or potentially temporary causes such as sleep deprivation or poor health. In these cases, it’s easy to keep on believing that ways of doing things that used to work will continue to be effective, not realizing that something fundamental (WMC) has changed, necessitating new strategies.
So, let’s get to the burning question: how do you read / study effectively when your WMC is low?
The first thing is to be aware of how little you can hold in your mind at one time. This is where paragraphs are so useful, and why readability is affected by length of paragraphs. Theoretically (according to ‘best practice’), there should be no more than one idea per paragraph. The trick to successfully negotiating the hurdle of lengthy texts lies in encapsulation, and like most effective strategies, it becomes easier with practice.
Rule 1: Reduce each paragraph to as concise a label as you can.
Remember: “concise” means not simply brief, but rather, as brief as it can be while still reminding you of all the relevant information that is encompassed in the text. This is about capturing the essence.
Yes, it’s an art, and to do it well takes a lot of practice. But you don’t have to be a master of it to benefit from the strategy.
The next step is to connect your labels. This, of course, is a situation where a mind map-type strategy is very useful.
Rule 2: Connect your labels.
If you are one of those who are intimidated by mind maps, don’t be alarmed. I said, “mind map-type”. All you have to do is write your labels (I call them labels to emphasize the need for brevity, but of course they may be as long as a shortish sentence) on a sheet of paper, preferably in a loose circle so that you can easily draw lines between them. You should also try to write something next to these lines, to express your idea of the connection. These link labels provide a still more condensed summary of the ideas being connected. You can now make connections between these labels and the others.
The trick is to move in small steps, but not to stay small. Think of the process as a snowball, gathering ideas and facts as it goes, getting (slowly) bigger and bigger. Basically, it’s about condensing and connecting: the connections grow denser and the information more condensed, until you see the whole picture and understand the essence of it.
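If it helps to see the condense-and-connect process concretely, it can be modelled as a small labelled graph in which a cluster of connected labels is collapsed under a single summary label. The labels and link names below are invented examples for illustration only, not taken from either study.

```python
# Each paragraph label is a node; each explained connection is a
# labelled edge. A cluster of connected labels can be collapsed into
# one higher-level label, mirroring the 'snowball' of condensation.
labels = {
    "WMC limits reading": set(),
    "strategies: highlight, annotate": set(),
    "big picture still hard": set(),
}
edges = []

def connect(a, b, link):
    edges.append((a, b, link))

connect("WMC limits reading", "strategies: highlight, annotate",
        "prompts use of")
connect("strategies: highlight, annotate", "big picture still hard",
        "not sufficient for")

def collapse(cluster, summary):
    """Replace a set of labels with one condensed summary label,
    remembering which labels it encapsulates."""
    global edges
    for node in cluster:
        labels.pop(node, None)
    labels[summary] = set(cluster)
    edges = [(a, b, l) for (a, b, l) in edges
             if a not in cluster and b not in cluster]

collapse({"WMC limits reading", "strategies: highlight, annotate",
          "big picture still hard"},
         "low WMC: strategies help locally, not globally")
print(list(labels))  # one condensed label remains
```

The key property is that each collapse keeps a record of what it swallowed, so you can always unpack a summary label back into its parts: condensation without loss.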
Another advantage of this method is that you will have greatly increased your chances of remembering it in the long-term!
In a situation similar to that of the second study — assorted web pages — you want to end up with a tight cluster of labels for each page, the whole of which is summed up by one single label.
What all this means for teachers, writers of text books, and designers of instructional environments, is that they should put greater effort into making explicit global connections — the ‘big picture’.
A final comment about background knowledge. Notwithstanding the finding of the second study that there was no particular benefit to prior knowledge, the other part of this process is to make connections with knowledge you already have. I’d remind you again that that study was only testing an extremely limited knowledge set, and this greatly limits its implications for real-world learning.
I have spoken before of how long-term memory can effectively increase our limited WMC (regardless of whether your WMC is low or high), because long-term memory is essentially limitless. But information in it varies in its accessibility, and it is only the readily accessible information that can bolster working memory.
So, there are two aspects to this when it comes to reading comprehension. The first is that you want any relevant information you have in LTM to be ‘primed’, i.e. ready and waiting. The second is that you are obviously going to do better if you actually have some relevant information, and the more the better!
This is where the educational movement to ‘dig deep not broad’ falls down. Now, I am certainly not arguing against this approach; I think it has a lot of positive aspects. But let’s not throw out the baby with the bathwater. A certain amount of breadth is necessary, and this of course is where reading truly comes into its own. Reading widely garners the wide background knowledge that we need — and those with WMC problems need in particular — to comprehend text and counteract the limitations of working memory. Because reading widely — if you choose wisely — builds a rich database in LTM.
We say: you are what you eat. Another statement is at least as true: we are what we read.
Press release on the first study (pdf, cached by Google)
Second study: Banas, S., & Sanchez, C.A. (2012). Working Memory Capacity and Learning Underlying Conceptual Relationships Across Multiple Documents. Applied Cognitive Psychology. doi:10.1002/acp.2834
A brief round-up of a few of the latest findings reinforcing the fact that academic achievement is not all about academic ability or skills. Most of these relate to the importance of social factors.
Improving social belonging improves GPA, well-being & health, in African-American students
From Stanford, we have a reminder of the effects of stereotype threat, and an interesting intervention that ameliorated it. The study involved 92 freshmen, of whom 49 were African-American, and the rest white. Half the participants (none of whom were told the true purpose of the exercise) read surveys and essays written by upperclassmen of different ethnicities describing the difficulties they had fitting in during their first year at school. The other subjects read about experiences unrelated to a sense of belonging. The treatment subjects were then asked to write essays about why they thought the older college students' experiences changed, with illustrations from their own lives, and then to rewrite their essays into speeches that would be videotaped and could be shown to future students.
The idea of this intervention was to get the students to realize that everyone, regardless of race, has difficulty adjusting to college, and has times when they feel alienated or rejected.
While this exercise had no apparent effect on the white students, it had a significant impact on the grades and health of the black students. Grade point averages went up by almost a third of a grade between their sophomore and senior years, and 22% of them landed in the top 25% of their graduating class, compared to about 5% of black students who didn't participate in the exercise.
Moreover, the black students in the treatment group reported a greater sense of belonging compared to their peers in the control group; they were happier, less likely to spontaneously think about negative racial stereotypes, and apparently healthier (3 years after the intervention, 28% had visited a doctor recently, vs 60% in the control group).
Protecting against gender stereotype threat
Stereotype threat is a potential factor for gender as well as ethnicity.
I’ve reported on a number of studies showing that reminding women or girls of gender stereotypes in math results in poorer performance on subsequent math tests. A new study suggests that women could be “inoculated” against such effects if their math / science class is taught by a woman. Although in these experiments women’s academic performance didn’t suffer, their engagement and commitment to their STEM major were significantly affected.
In the first study, 72 women majoring in STEM subjects were given several tests measuring their implicit and explicit attitudes towards math vs English, plus a short but difficult math test. Half the students were (individually) tested by a female peer expert, supposedly double majoring in math and psychology, and half by a male peer. Those tested by a male showed negative implicit attitudes towards math, while those tested by a female showed equal liking for math and English on an implicit attitudes test. Similarly, women implicitly identified more with math in the presence of the female expert. On the math test, women who met the female expert attempted more problems (an average of 7.73 out of 10, compared to 6.39). There was no effect on performance — but because of the difficulty of the test, there was a floor effect.
In the second study, 101 women majoring in engineering were given short biographies of 5 engineers, who were either male or female, or descriptions of engineering innovations (control condition). Again, women presented with female engineers showed equal preference for math and English in the subsequent implicit attitudes test, while those presented with male engineers or innovations showed a significant implicit negative attitude to math. Implicit identification with math, however, wasn’t any stronger after reading about female engineers. Those who read about female engineers did report greater intentions to pursue an engineering career, and this was mediated by greater self-efficacy in engineering. Again, there was no effect on explicit attitudes toward math.
In the third study, the performance of 42 female and 49 male students in introductory calculus course sections taught by male (8 sections) and female (7 sections) instructors was compared. Professors were yoked to same-sex teaching assistants.
As with the earlier studies, female students implicitly liked math and English equally when the teacher was a woman, but had a decidedly more negative attitude toward math when their instructor was a man. Male students were unaffected by teacher gender. Similarly, female students showed greater implicit identification with math when their teacher was a woman; male students were unaffected. Female students also expected better grades when their teacher was a woman; male students didn’t differ as a function of teacher gender (it should be noted that this wasn’t because they thought the women would be more generous markers; marking was pooled across all the instructors, and the students knew this). There was no effect of teacher gender on final grade (but there was a main effect of student gender: women outperformed men).
In other words, the findings of the 3rd study confirmed the effects on implicit attitudes towards STEM subjects, and demonstrated that male students were unaffected by the interventions that affected female students.
Now we come to engagement. At the beginning of the semester, female students were much less likely than male students (9% vs 23%) to respond to questions put to the class, but later on, female students in sections led by women were much more likely to respond to such questions than were women in courses taught by men (46% vs 7%). Interestingly, more male students also responded to questions posed by female instructors (42% vs 26%). That would seem to suggest that male instructors are much more likely to use strategies that discourage many students from participating in class. But undeniably, women are more affected by this.
Additionally, at the beginning of the courses, around the same proportion of female students approached their instructors, regardless of instructor gender (12-13%). But later, while the percentage of female students approaching female instructors stayed constant, none of them approached male instructors. This could be taken to mean that male instructors consistently discouraged such behavior; male students, by contrast, did not change (an average of 7% at both Time 1 and Time 2).
The number of students who asked questions in class did not vary over time, or by student gender. However, it did vary by teacher gender: 22% of both male and female students asked questions in class when they were taught by women, while only 15% did so in courses taught by men.
Some of these effects then seem to indicate that male college instructors are more inclined to discourage student engagement. What the effects of that are remains to be seen.
Social and emotional learning programs found to boost student improvement
A review of 213 school programs that enhance students' social and emotional development has found that such programs not only significantly improved social and emotional skills, caring attitudes, and positive social behaviors, but also resulted in significant improvement on achievement tests (although only a small subset of these programs actually looked at this aspect, the numbers of students involved were very large).
The average improvement in grades and standardized-test scores was 11 percentile points — an improvement that falls within the range of effectiveness of academic interventions.
Boys need close friendships
Related to this perhaps (I looked but couldn’t find any gender numbers for the SEL programs), from the Celebration of Teaching and Learning Conference in New York, developmental psychologist Niobe Way argues that one reason why boys are struggling in school is that they are experiencing a "crisis of connection." Stereotypical notions of masculinity, that emphasize separation and independence, challenge their need for close friendships. She's found that many boys have close friendships that are being discouraged by anxiety about being seen as gay or effeminate.
Way says that having close friendships is linked to better physical and mental health, lower rates of drug use and gang membership, and higher levels of academic achievement and engagement. Asked what teachers could do, she encouraged them to allow boys to sit next to their best friends in class.
High rate of college students with unrecognized hearing loss
On a completely different note, a study involving 56 college students has found that fully a quarter of them showed 15 decibels or more of hearing loss at one or more test frequencies — an amount that is not severe enough to require a hearing aid, but could disrupt learning. The highest levels of high frequency hearing loss were in male students who reported using personal music players.
Walton, G. M., & Cohen G. L. (2011). A Brief Social-Belonging Intervention Improves Academic and Health Outcomes of Minority Students. Science. 331(6023), 1447 - 1451.
Stout, J. G., Dasgupta N., Hunsinger M., & McManus M. A. (2011). STEMing the tide: using ingroup experts to inoculate women's self-concept in science, technology, engineering, and mathematics (STEM). Journal of Personality and Social Psychology. 100(2), 255 - 270.
Durlak, J. A., Weissberg R. P., Dymnicki A. B., Taylor R. D., & Schellinger K. B. (2011). The Impact of Enhancing Students’ Social and Emotional Learning: A Meta-Analysis of School-Based Universal Interventions. Child Development. 82(1), 405 - 432.
Le Prell, C. G., Hensley B. N., Campbell K. C. M., Hall J. W., & Guire K. (2011). Evidence of hearing loss in a ‘normally-hearing’ college-student population. International Journal of Audiology. 50(S1), S21 - S31.
A Scientific American article talks about a finding that refines a widely-reported association between self-regulation and academic achievement. This association relates to the famous ‘marshmallow test’, in which young children were left alone with a marshmallow, having been told that if they could hold off eating it until the researcher returned, they would get two marshmallows. The ability of the young pre-school children to wait has been linked to subsequent achievement at school, and indeed has been said to be as important as IQ.
The finding relates to other factors that might be involved in a child’s decision not to wait — specifically, children who live in an environment where anything they have could be taken away at any time make a completely rational choice by not waiting.
Another recent study makes a wider point: the children in the classical paradigm don’t know how long they will have to wait. This, the researchers say, changes everything.
In a survey, adults were asked to imagine themselves in a variety of scenarios, in which they were told the amount of time they had been at an activity such as watching a movie, practicing the piano, or trying to lose weight, and were then asked how long they thought it would be until they reached their goal or the end. There were marked differences in responses depending on whether the scenario had a relatively well-defined length or was more ambiguous.
Now, this in itself is no surprise. What is a surprise is that, when you don’t know anything about when the outcome will occur, the reverse of the usual feeling (that the longer you’ve waited, the closer you are to the end) occurs: the longer you wait, the farther away you feel you are getting from that outcome.
The researchers suggest that this changes the interpretation of the marshmallow test — not in terms of predicting ability to delay gratification, but in terms of the mechanism behind it. Rather than reflecting two opposing systems fighting it out (your passionate id at war with your calculating super-ego), waiting for a while then giving in may be perfectly rational behavior. It may not be about ‘running out’ of will-power at all.
According to this model, which fits the observed behavior, and which I have to say makes perfect sense to me, there are three factors that influence persistence:
- beliefs about time — which in this context has to do with how the predicted delay changes over time, i.e., do you believe that the remaining length of time is likely to be the same, shorter, or longer;
- perceived reward magnitude — how much more valuable the delayed reward is to you than the immediate reward;
- temporal discount rate — how strongly you prefer a sooner reward over a later one.
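To make the model concrete, here is a minimal sketch of how the three factors might combine under simple exponential discounting. This is my own illustration, not the researchers’ model as published; the reward values and discount rates are arbitrary:

```python
import math

def value_of_waiting(delayed_reward, expected_remaining_delay, discount_rate):
    """Exponentially discounted value of holding out for the delayed reward."""
    return delayed_reward * math.exp(-discount_rate * expected_remaining_delay)

def should_persist(delayed_reward, immediate_reward,
                   expected_remaining_delay, discount_rate):
    """Persist only while the discounted delayed reward beats the sure thing now."""
    waiting = value_of_waiting(delayed_reward, expected_remaining_delay, discount_rate)
    return waiting > immediate_reward

# Two marshmallows later vs. one now, believing ~5 minutes remain:
print(should_persist(2.0, 1.0, 5.0, 0.05))  # True: a patient discounter keeps waiting
print(should_persist(2.0, 1.0, 5.0, 0.30))  # False: a steep discounter rationally gives in
```

Notice that ‘giving in’ here is not a failure of will: with the same beliefs and the same rewards, a higher discount rate alone flips the rational choice.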
A crucial point about temporal beliefs is that they can change as time passes. So, if you’re waiting for a bus, then the reasonable thing to believe is that, the longer you wait, the less time you will have left to wait. But what about if you’re waiting at a stop very late at night? In that case, the longer you wait, the more certain you might become that a bus will not in fact be coming for many hours. How about when you text someone? You probably start off expecting a reply right away, but the longer you wait the longer you expect to wait (if they’re not answering right away, it might be hours; they might not even see your text at all).
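The bus and text-message examples can be made precise with two toy delay distributions (my own choice for illustration, not taken from the paper): a uniform distribution for the scheduled bus, and a heavy-tailed Pareto distribution for text replies. Conditioning on having already waited t minutes, the expected remaining wait shrinks in the first case and grows in the second:

```python
def remaining_uniform(elapsed, horizon=15.0):
    """Scheduled bus, arrival uniform on (0, horizon):
    E[T - t | T > t] = (horizon - t) / 2, which shrinks as you wait."""
    return (horizon - elapsed) / 2

def remaining_pareto(elapsed, alpha=1.5, xm=1.0):
    """Heavy-tailed delay, Pareto(alpha, xm):
    E[T - t | T > t] = t / (alpha - 1) for t >= xm, which grows as you wait."""
    return max(elapsed, xm) / (alpha - 1)

for t in (1, 5, 10):
    print(f"waited {t:2d} min: bus {remaining_uniform(t):4.1f} min left, "
          f"text reply {remaining_pareto(t):4.1f} min left")
# waited  1 min: bus  7.0 min left, text reply  2.0 min left
# waited  5 min: bus  5.0 min left, text reply 10.0 min left
# waited 10 min: bus  2.5 min left, text reply 20.0 min left
```

Under the heavy-tailed belief, quitting after a while is exactly what a rational agent should do — the goal really is receding.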
Another important aspect of these factors is that they are subjective (especially the last two), and will vary from individual to individual. This attributes ‘failures’ to differences in an individual’s temporal discount rate and perceived reward magnitude, rather than to poor self-control.
But what about the evidence that performance on this test correlates with later academic achievement? Well, temporal discount rate also appears to show ‘trait-like stability over time’, and has been found to correlate with cognitive ability. Temporal discount rate, it seems to me, has a clear connection to motivation, and I have talked before about the way motivation can make a significant impact on someone’s IQ score or exam performance.
So maybe we should move away from worries about ‘self-control’, and start thinking about why some people put a higher value on short waiting times than others — how much of this is due to early experiences? what can we do about it?
We also need to think very hard about the common belief that persistence is always a virtue. If you’re waiting for a bus that hasn’t come after an hour, and it’s now one in the morning, your best choice is probably to give up and find some other means home.
Although persistence is often regarded as a virtue, misguided persistence can waste time and resources and can therefore defeat one's chances of success at superordinate goals . . . Rather than assuming that persistence is generally adaptive, the issue should be conceptualized as making judgments about when persistence will be effective and when it will be useless or even self-defeating. (Baumeister & Scher, 1988, pp. 12–13)
All of which is to say that, as with all human behavior, persistence (sometimes equated to ‘will-power’; sometimes to 'self-regulation') is a product of both the individual and the environment. If some children are doing well and others are not, perhaps you shouldn’t be attributing this to stable traits of the children, but to the way different children perceive the situation.
Nor is it only in the academic environment that these things matter. Our ability to delay gratification and our motivation are attributes that underlie our behavior and our success across our lives. If we turn these ‘attributes’ around and, instead of seeing them as personal traits, see them as dynamic attributes that reflect situational factors interacting with personal attributes, then we have a better chance of getting the results we want. If we can pinpoint perceived reward and temporal discount rate as critical factors in this individual–environment interaction, we know exactly what variables to consider and manipulate.
We are built to like simple solutions — a number, a label that we can pin on ourselves or another — but surely we have become sufficiently sophisticated that we can now handle more complex information? We need to move from considering people, whether ourselves or others, as independent agents acting in a vacuum, to considering them as part of an indissoluble organism–environment unit. Let’s get away from a fixation on IQ scores, or SAT scores, or even complex multi-factorial scores, and realize that those, even the most predictive ones, are only ever one part of the story. No one is the same person at every moment, and it’s time we took that point more seriously.
McGuire, J. T., & Kable, J. W. (2013). Rational Temporal Predictions Can Underlie Apparent Failures to Delay Gratification. Psychological Review, 120(2), 395–410. doi:10.1037/a0031910
Baumeister, R. F., & Scher, S. J. (1988). Self-defeating behavior patterns among normal individuals: Review and analysis of common self-destructive tendencies. Psychological Bulletin, 104, 3–22. doi:10.1037/0033-2909.104.1.3