Total Cognitive Burden

Because it holds some personal resonance for me, my recent round-up of genetic news called to mind food allergies. Now food allergies can be tricky beasts to diagnose, and the reason is, they’re interactive. Maybe you can eat a food one day and everything’s fine; another day, you break out in hives. This is not simply a matter of how much you have eaten; the situation is more complex than that. It’s a function of what we might call total allergic load — all the things you might be sensitive to (some of which you may not realize, because on their own, in the quantities you normally consume, they cause little or no problem). And then there are other factors which make you more sensitive, such as time of month (for women), and time of day. Perhaps, in light of the recent findings about the effects of environmental temperature on multiple sclerosis, temperature is another of those factors. And so on.

Now, I am not a medical doctor, nor a neuroscientist. I’m a cognitive psychologist who has spent the last 20 years reading and writing about memory. But I have taken a very broad interest in memory and cognition, and the picture I see developing is that age-related cognitive decline, mild cognitive impairment, late-onset Alzheimer’s, and early-onset Alzheimer’s represent places on a continuum. The situation does not seem as simple as saying that these all have the same cause, because it now seems evident that there are multiple causes of dementia and cognitive impairment. I think we should start talking about Total Cognitive Burden.

Total Cognitive Burden would include genetics, lifestyle and environmental factors, childhood experience, and prenatal factors.

First, genetics.

It is estimated that around a quarter of Alzheimer’s cases are familial, that is, they are directly linked to the possession of specific gene mutations. For the other 75%, genes are likely to be a factor but so are lifestyle and environmental factors. Having said that, the most recent findings suggest that the distinction between familial and sporadic is somewhat fuzzy, so perhaps it would be fairer to say we term it familial when genetics are the principal cause, and sporadic when lifestyle and environmental factors are at least as important.

While three genes have been clearly linked to early-onset Alzheimer’s, only one gene is an established factor in late-onset Alzheimer’s — the so-called Alzheimer’s gene, the e4 allele on the APOE gene (at 19q13.2). It’s estimated that 40-65% of Alzheimer’s patients have at least one copy of this allele, and those with two copies have up to 20 times the risk of developing Alzheimer’s. Nevertheless, it is perfectly possible to have this allele, even two copies of it, and not develop the disease. It is also quite possible — and indeed a third of Alzheimer’s patients have managed it — to develop Alzheimer’s in the absence of this risky gene variant.

A recent review selected 15 genes for which there is sufficient evidence to associate them with Alzheimer’s: APOE, CLU, PICALM, EXOC3L2, BIN1, CR1, SORL1, TNK1, IL8, LDLR, CST3, CHRNB2, SORCS1, TNF, and CCR2. Most of these are directly implicated in cholesterol metabolism, intracellular transport of beta-amyloid precursor, and autophagy of damaged organelles, and indirectly in inflammatory response.

For example, five of these genes (APOE; LDLR; SORL1; CLU; TNF) are implicated in lipid metabolism (four in cholesterol metabolism). This is consistent with evidence that high cholesterol in midlife is a risk factor for developing Alzheimer’s. Cholesterol plays a key role in regulating amyloid-beta and its development into toxic oligomers.

Five genes (PICALM; SORL1; APOE; BIN1; LDLR) appear to be involved in the intracellular transport of APP, directly influencing whether the precursor proteins develop properly.

Seven genes (TNF; IL8; CR1; CLU; CCR2; PICALM; CHRNB2) were found to interfere with the immune system, increasing inflammation in the brain.

If you’re interested you can read more about each of these genes in that review, but the point I want to make is that genes can’t be considered alone. They interact with each other, and they interact with other factors (for example, there is some evidence that SORL1 is a risk factor for women only; and if you have always kept your cholesterol levels low, through diet and/or drugs, having genes that poorly manage cholesterol will not be so much of an issue). It seems reasonable to assume that the particular nature of an individual’s pathway to Alzheimer’s will be determined by the precise collection of variants on several genes; this will also help determine how soon and how fast the Alzheimer’s develops.
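Purely to make this overlap concrete, here is a toy sketch (in Python) that takes the pathway groupings as I’ve just summarized them from the review, and tallies which genes belong to more than one pathway. The groupings are only the ones listed in the preceding paragraphs; nothing more is implied.

    # Toy sketch: which of the review's Alzheimer's-associated genes
    # appear in more than one of the pathways summarized above?
    from collections import defaultdict

    pathways = {
        "lipid metabolism": {"APOE", "LDLR", "SORL1", "CLU", "TNF"},
        "APP transport": {"PICALM", "SORL1", "APOE", "BIN1", "LDLR"},
        "immune/inflammation": {"TNF", "IL8", "CR1", "CLU",
                                "CCR2", "PICALM", "CHRNB2"},
    }

    membership = defaultdict(list)
    for pathway, genes in pathways.items():
        for gene in genes:
            membership[gene].append(pathway)

    # Print only the genes that sit in two or more pathway groups.
    for gene, paths in sorted(membership.items()):
        if len(paths) > 1:
            print(f"{gene}: {', '.join(paths)}")

Run it and you see at once that APOE, LDLR, SORL1, CLU, TNF, and PICALM each sit in two of the three groups: a concrete reminder that these genes, and variants of them, don’t act along a single pathway.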

[I say ‘Alzheimer’s’, but Alzheimer’s is not, of course, the only path to dementia, and vascular dementia in particular is closely associated. Moreover, my focus on Alzheimer’s isn’t meant to limit the discussion. When I talk about the pathway to dementia, I am thinking about all these points on the continuum: age-related cognitive decline, mild cognitive impairment, senile dementia, and early dementia.]

It also seems plausible to suggest that the precise collection of relevant genes will determine not only which drug and neurological treatments might be most effective, but also which lifestyle and environmental factors are most important in preventing the development of the disease.

I have reported often on lifestyle factors that affect cognitive decline and dementia — factors such as diet, exercise, intellectual and social engagement — factors that may mediate risk through their effects on cardiovascular health, diabetes, inflammation, and cognitive reserve. We are only beginning to understand how childhood and prenatal environment might also have effects on cognitive health many decades later — for example, through their effects on head size and brain development.

You cannot do anything about your genes, but genes are not destiny. You cannot, now, do anything about your prenatal environment or your early years (but you may be able to do something about your children’s or your grandchildren’s). But you can, perhaps, be aware of whether you have vulnerabilities in these areas — vulnerabilities which will add to your Total Cognitive Burden. More easily, you can assess your lifestyle — over the course of your life — in these terms. Here are the sorts of questions you might ask yourself:

Do you have any health issues such as diabetes, cardiovascular disease, multiple sclerosis, positive HIV status?

Do you have a sleep disorder?

Have you, at any point in your life, been exposed to toxic elements (such as lead or severe air pollution) for a significant length of time?

Did you experience a lot of stress in childhood? Stress might come from a dangerous living environment (such as a violent neighborhood), warring parents, a dysfunctional parent, or a personally traumatic event (to take some examples).

Did you do a lot of drugs, or indulge in binge drinking, in college?

Have you spent many years eating an unhealthy diet — one heavy in fats and sugars?

Do you drink heavily?

Do you have ongoing stress in your life, or have you experienced significant stress at some period during middle age?

Do you rarely engage in exercise?

Do you spend most evenings blobbed out in front of the TV?

Do you experience little in the way of mental stimulation from your occupation or hobbies?

These questions are just off the top of my head, the ones that came most readily to mind. But they give you, I hope, some idea of the range of factors that might go to make up your TCB. The next step from there is to see what factors you can do something about. While you can’t do anything about your past, the good news is that, at any age, some benefit accrues from engaging in preventative strategies (such as improving your sleep, reducing your stress, eating healthily, exercising regularly, engaging in mentally and socially stimulating activities). How much benefit will depend on how much effort you put into these strategies, on which and how many TCB factors are weighing on you, and on how far along the path you already are. But it’s never too late to do something.
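There is, of course, no validated formula for computing a TCB score. But purely to make the ‘total burden’ idea concrete, here is a toy tally; the factor list mirrors my questions above, and the equal weighting is an invented simplification, not a clinical instrument.

    # Toy, equal-weight tally of TCB factors -- an invented illustration,
    # NOT a clinical instrument. Real factors would interact and carry
    # different weights, as discussed above.
    tcb_factors = {
        "chronic health issue (diabetes, CVD, MS, HIV)": True,
        "sleep disorder": False,
        "long-term toxic exposure (lead, air pollution)": False,
        "high childhood stress": True,
        "heavy drug use or binge drinking when young": False,
        "years of a high-fat, high-sugar diet": True,
        "heavy drinking now": False,
        "ongoing or midlife stress": True,
        "rarely exercise": False,
        "most evenings in front of the TV": False,
        "little mental stimulation from work or hobbies": False,
    }

    burden = sum(tcb_factors.values())
    print(f"Toy TCB tally: {burden} of {len(tcb_factors)} factors present")

The point of the toy is only that burden accumulates; in reality, as with allergic load, the factors interact rather than simply add.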

On the up-side, you might be relieved by such an exercise, realizing that your risk of dementia is smaller than you feared! If so, you might use this knowledge to motivate you to aspire to an excellent old age — with no cognitive decline. We tend to assume that declining faculties are an inevitable consequence of getting older, but this doesn’t have to be true. Some ‘super-agers’ have shown us that it is possible to grow very old and still perform as well as those decades younger. If your TCB is low, why don’t you make it even lower, and aspire to be one of those!

Diabetes: its role in cognitive impairment & dementia

There was an alarming article recently in the Guardian newspaper. It said that in the UK, diabetes is now nearly four times as common as all forms of cancer combined. Some 3.6 million people in the UK are thought to have type 2 diabetes (2.8 million are diagnosed, but there’s thought to be a large number undiagnosed), and nearly twice as many people are at high risk of developing it. The bit that really stunned me? Diabetes costs the health service roughly 10% of its entire budget. In North America, one in five men over 50 has diabetes. In some parts of the world, as much as a quarter of the population is said to have diabetes, or even a third (Nauru)! Type 2 diabetes is six times more common in people of South Asian descent, and three times more common in people of African and African-Caribbean origin.

Why am I talking about diabetes in a blog dedicated to memory and learning? Because diabetes, if left untreated, has a number of complications, several of which impinge on brain function.

For example, over half of those with type 2 diabetes will die of cardiovascular disease, and vascular risk factors not only increase your chances of heart problems and stroke (diabetes doubles your risk of stroke), but also of cognitive impairment and dementia.

Type 2 diabetes is associated with obesity, which can bring about high blood pressure and sleep apnea, both of which are cognitive risk factors.

Both diabetes and hypertension increase the chances of white-matter lesions in the brain (this was even evident in obese adolescents with diabetes), and the degree of white-matter lesions in the brain is related to the severity of age-related cognitive decline and increased risk of Alzheimer’s.

Mild cognitive impairment is more likely to develop into Alzheimer’s if vascular risk factors such as high blood pressure, diabetes, cerebrovascular disease and high cholesterol are present, especially if untreated. Indeed it has been suggested that Alzheimer’s memory loss could be due to a third form of diabetes. And Down syndrome, Alzheimer's, diabetes, and cardiovascular disease, have been shown to share a common disease mechanism.

So diabetes is part of a suite of factors that act on the heart and the brain.

But treatment of such risk factors (e.g. by using high blood pressure medicines, insulin, cholesterol-lowering drugs and diet control, giving up smoking or drinking) significantly reduces the risk of developing Alzheimer’s. Bariatric surgery has been found to improve cognition in obese patients. And several factors have been shown to make a significant difference as to whether a diabetic develops cognitive problems.

Older diabetics are more likely to develop cognitive problems if they:

  • have higher (though still normal) blood pressure,
  • have gait and balance problems,
  • report themselves to be in bad health regardless of actual problems (this may be related to stress and anxiety),
  • have higher levels of the stress hormone cortisol,
  • don’t manage their condition (poor glucose control),
  • have depression,
  • eat high-fat meals.

Glucose control / insulin sensitivity may be a crucial factor even for non-diabetics. A study involving non-diabetic middle-aged and elderly people found that those with impaired glucose tolerance (a pre-diabetic condition) had a smaller hippocampus and scored worse on tests for recent memory. And some evidence suggests that a link found between midlife obesity and increased risk of cognitive impairment and dementia in old age may have to do with poorer insulin sensitivity.

Exercise and dietary changes are of course the main lifestyle factors that can turn such glucose impairment around, and do wonders for diabetes too. In fact, a recent small study found that an extreme low-calorie diet (don’t try this without medical help!) normalized pre-breakfast blood sugar levels and pancreas activity within a week, and may even have permanently cured some diabetics after a couple of months.

Diabetes appears to affect two cognitive domains in particular: executive functioning and speed of processing.

You can read all the research reports on diabetes that I’ve made over the years in my new topic collection.

Neglect your senses at your cognitive peril!

Impaired vision is common in old age and even more so in Alzheimer’s disease, and this results not only from damage in the association areas of the brain but also from problems in lower-level areas. A major factor in whether visual impairment impacts everyday function is contrast sensitivity.

Poor contrast sensitivity not only slows down your perceiving and encoding, it also interacts with higher-order processing, such as decision-making. These effects may lie behind the established interactions between age, perceptual ability, and cognitive ability. Such interactions are not restricted to sight — they’ve been reported for several senses.

In fact, it’s been suggested that much of what we regard as ‘normal’ cognitive decline in aging is simply a consequence of having senses that don’t work as well as they used to.

The effects in Alzheimer’s disease are, I think, particularly interesting, because we tend to regard any cognitive impairment here as inevitable and a product of pathological brain damage we can’t do anything much about. But what if some of the cognitive impairment could be removed, simply by improving the perceptual input?

That’s what some recent studies have shown, and I think it’s noteworthy not only because of what it means for those with Alzheimer’s and mild cognitive impairment, but also because of the implications for any normally aging person.

So let’s look at some of this research.

Let’s start with the connection between visual and cognitive impairment.

Analysis of data from the Health and Retirement Study and Medicare files, involving 625 older adults, found that those with very good or excellent vision at baseline had a 63% reduced risk of developing dementia over a mean follow-up period of 8.5 years. Those with poorer vision who didn’t visit an ophthalmologist had a 9.5-fold increased risk of Alzheimer disease and a 5-fold increased risk of mild cognitive impairment. Poorer vision without a previous eye procedure increased the risk of Alzheimer’s 5-fold. For Americans aged 90 years or older, 78% who kept their cognitive skills had received at least one previous eye procedure compared with 52% of those with Alzheimer’s disease.

In other words, if you leave poor vision untreated, you greatly increase your risk of cognitive impairment and dementia.

Similarly, cognitive testing of nearly 3000 older adults with age-related macular degeneration found that cognitive function declined with increased macular abnormalities and reduced visual acuity. This remained true after factors such as age, education, smoking status, diabetes, hypertension, and depression, were accounted for.

And a study comparing the performance of 135 patients with probable Alzheimer’s and 97 matched normal controls on a test of perceptual organization ability (Hooper Visual Organization Test) found that the VOT was sensitive to severity of dementia in the Alzheimer’s patients.

So let’s move on to what we can do about it. Treatment for impaired vision is of course one necessary aspect, but there is also the matter of trying to improve the perceptual environment. Let’s look at this research in a bit more detail.

A 2007 study compared the performance of 35 older adults with probable Alzheimer’s, 35 healthy older adults, and 58 young adults. They were all screened to exclude those with visual disorders, such as cataracts, glaucoma, or macular degeneration. There were significant visual acuity differences between all 3 groups (median scores: 20/16 for young adults; 20/25 for healthy older adults; 20/32 for Alzheimer’s patients).

Contrast sensitivity was also significantly different between the groups, although this was moderated by spatial frequency (normal contrast sensitivity varies according to spatial frequency, so this is not unexpected). Also unsurprisingly, the young adults outperformed both older groups at every spatial frequency except the lowest, where healthy older adults matched them. Similarly, healthy older adults outperformed Alzheimer’s patients at every frequency bar one — the highest frequency.

For Alzheimer’s patients, there was a significant correlation between contrast sensitivity and their cognitive (MMSE) score (except at the lowest frequency of course).

Participants carried out a number of cognitive/perceptual tasks: letter identification; word reading; unfamiliar-face matching; picture naming; pattern completion. Stimuli varied in their perceptual strength (contrast with background).

Letter reading: there were no significant differences between groups in terms of accuracy, but stimulus strength affected reaction time for all participants, and this was different for the groups. In particular, older adults benefited most from having the greatest contrast, with the Alzheimer’s group benefiting more than the healthy older group. Moreover, Alzheimer’s patients seeing the letters at medium strength were not significantly different from healthy older adults seeing the letters at low strength.

Word reading: here there were significant differences between all groups in accuracy as well as reaction time. There was also a significant effect of stimulus strength, which again interacted with group. While young adults’ accuracy wasn’t affected by stimulus strength, both older groups were. Again, there were no differences between the Alzheimer’s group and healthy older adults when the former group was at high stimulus strength and the latter at medium, or at medium vs low. That was true for both accuracy and reaction time.

Picture naming: By and large all groups, even the Alzheimer’s one, found this task easy. Nevertheless, there were effects of stimulus strength, and once again, the performance of the Alzheimer’s group when the stimuli were at medium strength matched that of healthy older adults with low strength stimuli.

Raven’s Matrices and Benton Faces: Here the differences between all groups could not in general be ameliorated by manipulating stimulus strength. The exception was with the Benton Faces, where Alzheimer’s patients seeing the medium strength stimuli matched the performance of healthy older adults seeing low strength stimuli.

In summary, then, for letter reading (reaction time), word reading (identification accuracy and reaction time), picture naming, and face discrimination, manipulating stimulus strength in terms of contrast was sufficient to bring the performance of individuals with Alzheimer’s to a level equal to that of their healthy age-matched counterparts.

It may be that the failure of this manipulation to affect performance on the Raven’s Matrices reflects the greater complexity of these stimuli or the greater demands of the task. However, the success of the manipulation in the case of the Benton Faces — a similar task with stimuli of apparently similar complexity — contradicts this. It may be that the stimulus manipulation simply requires some more appropriate tweaking to be effective.

It might be thought that these effects are a simple product of making stimuli easier to see, but the findings are a little more complex than I’ve rendered them. The precise effect of the manipulation varied depending on the type of stimuli. For example, in some cases there was no difference between low and medium stimuli, in others no difference between medium and high; in some, the low contrast stimuli were the most difficult, in others the low and medium strength stimuli were equally difficult, and on one occasion high strength stimuli were more difficult than medium.

The finding that Alzheimer’s individuals can perform as well as healthy older adults on letter and word reading tasks when the contrast is raised suggests that the reading difficulties that are common in Alzheimer’s are not solely due to cognitive impairment, but are partly perceptual. Similarly, naming errors may not be solely due to semantic processing problems, but also to perceptual problems.

Alzheimer’s individuals have been shown to do better recognizing stimuli the closer the representation is to the real-world object. Perhaps it is this that underlies the effect of stimulus strength — the representation of the stimulus when presented at a lower strength is too weak for the compromised Alzheimer’s visual system.

All this is not to say that there are not very real semantic and cognitive problems! But they are not the sole issue.

I said before that for Alzheimer’s patients there was a significant correlation between contrast sensitivity and their MMSE score. This is consistent with several studies, which have found that dementia severity is correlated with contrast sensitivity at some spatial frequencies. This consistency, together with these experimental findings, suggests that contrast sensitivity is in itself an important variable in cognitive performance, and that contrast sensitivity and dementia severity may have a common substrate.

It’s also important to note that the manipulations of contrast were standard across the group. It may well be that individualized manipulations would have even greater benefits.

Another recent study comparing the performance of healthy older and younger adults and individuals with Alzheimer's disease and Parkinson's disease on the digit cancellation test (a visual search task used in the diagnosis of Alzheimer’s), found that increased contrast brought the healthy older adults and those with Parkinson’s up to the level of the younger adults, and significantly benefited Alzheimer’s individuals — without, however, overcoming all their impairment.

There were two healthy older adult control groups: one age-matched to the Alzheimer’s group, and one age-matched to the Parkinson’s group. The former were some 10.5 years older than the latter. Interestingly, the younger of the two older control groups (average age 64) performed at the same level as the young adults (average age 20), while the older control group performed significantly worse. As expected, both the Parkinson’s group and the Alzheimer’s group performed worse than their age-matched controls.

However, when contrast was individually tailored at the level at which the person correctly identified a digit appearing for 35.5 ms 80% of the time, there were no significant performance differences between any of the three control groups or the Parkinson’s group. Only the Alzheimer’s group still showed impaired performance.

The idea of this “critical contrast” comparison was to produce stimuli that would be equally challenging for all participants. It was not about finding the optimal level for each individual (and indeed, young controls and the younger old controls both performed better at higher contrast levels). The findings indicate that poorer performance by older adults and those with Parkinson’s is due largely to their weaker contrast sensitivity, but those with Alzheimer’s are also hampered by their impaired ability to conduct a visual search.
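The paper doesn’t spell out its calibration procedure, but thresholds like this (‘the contrast at which a briefly shown digit is identified 80% of the time’) are typically found with an adaptive staircase. Here is a minimal sketch of that general idea; the simulated observer, step size, and stopping rule are all my own assumptions, not the study’s method.

    # Toy 3-down/1-up staircase, which converges near 79% correct --
    # close to the 80% criterion described above. The simulated observer
    # and all parameters here are illustrative assumptions.
    import math
    import random

    def observer_correct(contrast, threshold=0.2, slope=10.0):
        """Simulated chance of correctly identifying the digit."""
        p = 1 / (1 + math.exp(-slope * (contrast - threshold)))
        return random.random() < 0.1 + 0.9 * p   # floor allows for guessing

    contrast, step = 0.5, 0.05
    correct_run = 0
    reversals = []
    last_direction = None

    while len(reversals) < 12:
        if observer_correct(contrast):
            correct_run += 1
            if correct_run == 3:              # three in a row: make it harder
                correct_run = 0
                if last_direction == "up":
                    reversals.append(contrast)
                contrast = max(0.01, contrast - step)
                last_direction = "down"
        else:
            correct_run = 0                   # any error: make it easier
            if last_direction == "down":
                reversals.append(contrast)
            contrast += step
            last_direction = "up"

    # Average the last few reversal points to estimate the threshold.
    print(f"Estimated critical contrast: {sum(reversals[-8:]) / 8:.3f}")

Tailoring contrast individually in some such way is what let the researchers equate task difficulty across groups, rather than optimize it for each person.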

The same researchers demonstrated this in a real-world setting, using Bingo cards. Bingo is a popular activity in nursing homes, senior centers and assisted-living facilities, and has both social and cognitive benefits.

Varying the cards in terms of contrast, size, and visual complexity revealed that all groups benefited from increased stimulus size and decreased complexity. Those with mild Alzheimer’s were able to perform at levels comparable to their healthy peers, although those with more severe dementia gained little benefit.

Contrast boosting has also been shown to work in everyday environments: people with dementia can navigate more safely around their homes when objects in it have more contrast (e.g. a black sofa in a white room), and eat more if they use a white plate and tableware on a dark tablecloth or are served food that contrasts with the color of the plate.

There’s a third possible approach that might also be employed to some benefit, although this is more speculative. A study recently reported at the American Association for the Advancement of Science annual conference revealed that the visual deficits found in individuals who were born with cataracts in both eyes, and have since had their vision corrected, can be overcome through video game playing.

After playing an action video game for just 40 hours over four weeks, the patients were better at seeing small print, the direction of moving dots, and the identity of faces.

The small study (this is not, after all, a common condition) involved six people aged 19 to 31 who were born with dense cataracts in each eye. Despite these cataracts being removed early in life, such individuals still grow up with poorer vision, because normal development of the visual cortex has been disrupted.

The game required players to respond to action directly ahead of them and in the periphery of their vision, and to track objects that are sometimes faint and moving in different directions. Best results were achieved when players were engaged at the highest skill level they could manage.

Now this is quite a different circumstance to that of individuals whose visual system developed normally but is now degrading. However, if vision worsens for some time before being corrected, or if relevant activities/stimulation have been allowed to decline, it may be that some of the deficit is not due to damage as such, but to more malleable effects. In the same way that we now say that cognitive abilities need to be kept in use if they are not to be lost, perceptual abilities (to the extent that they are cognitive, which is a great extent) may benefit from active use and training.

In other words, if you have perceptual deficits, whether in sight, hearing, smell, or taste, you should give some thought to dealing with them. While I don’t know of any research to do with taste, I have reported on several studies associating hearing loss with age-related cognitive impairment or dementia, and similarly olfactory impairment. Of particular interest is the research on reviving a failing sense of smell through training, which suggested that one road to olfactory impairment is through neglect, and that the sense could be restored through training (in an animal model). Similarly, I have reported, more than once, on the evidence that music training can help protect against hearing loss in old age. (You can find more research on perception, training, and old age, on the Perception aggregated news page.)

 

For more on the:

Bingo study: https://www.eurekalert.org/pub_releases/2012-01/cwru-gh010312.php

Video game study:

https://www.guardian.co.uk/science/2012/feb/17/videogames-eyesight-rare-eye-disorder

https://medicalxpress.com/news/2012-02-gaming-eyesight.html

References

(In order of mention)

Rogers MA, Langa KM. 2010. Untreated poor vision: a contributing factor to late-life dementia. American Journal of Epidemiology, 171(6), 728-35.

Clemons TE, Rankin MW, McBee WL, Age-Related Eye Disease Study Research Group. 2006. Cognitive impairment in the Age-Related Eye Disease Study: AREDS report no. 16. Archives of Ophthalmology, 124(4), 537-43.

Paxton JL, Peavy GM, Jenkins C, Rice VA, Heindel WC, Salmon DP. 2007. Deterioration of visual-perceptual organization ability in Alzheimer's disease. Cortex, 43(7), 967-75.

Cronin-Golomb A, Gilmore GC, Neargarder S, Morrison SR, Laudate TM. 2007. Enhanced stimulus strength improves visual cognition in aging and Alzheimer’s disease. Cortex, 43, 952-966.

Toner CK, Reese BE, Neargarder S, Riedel TM, Gilmore GC, Cronin-Golomb A. 2011. Vision-fair neuropsychological assessment in normal aging, Parkinson's disease and Alzheimer's disease. Psychology and Aging, published online December 26.

Laudate TM, Neargarder S, Dunne TE, Sullivan KD, Joshi P, Gilmore GC, et al. 2011. Bingo! Externally supported performance intervention for deficient visual search in normal aging, Parkinson's disease, and Alzheimer's disease. Aging, Neuropsychology, and Cognition, 19(1-2), 102-121.

Should learning facts by rote be central to education?

Michael Gove is reported as saying that ‘Learning facts by rote should be a central part of the school experience’, a philosophy which apparently underpins his shakeup of school exams. Arguing that "memorisation is a necessary precondition of understanding", he believes that exams that require students to memorize quantities of material ‘promote motivation, solidify knowledge, and guarantee standards’.

Let’s start with one sturdy argument: "Only when facts and concepts are committed securely to the working memory, so that it is no effort to recall them and no effort is required to work things out from first principles, do we really have a secure hold on knowledge.”

This is a great point, and I think all those in the ‘it’s all about learning how to learn’ camp should take due notice. On the other hand, the idea that memorizing quantities of material by rote is motivating is a very shaky argument indeed. Perhaps Gove himself enjoyed doing this at school, but I’d suggest it’s only motivating for those who can do it easily, and find that it puts them ‘above’ many other students.

But let’s not get into critiquing Gove’s stance on education. My purpose here is to discuss two aspects of it. The first is the idea that rote memorization is central to education. The second is more implicit: the idea that knowledge is central to education.

This is the nub of the issue: to what extent should students be acquiring ‘knowledge’ vs expertise in acquiring, managing, and connecting knowledge?

This is the central issue of today’s shifting world. As Ronald Bailey recently discussed in Reason magazine, “Half of the Facts You Know Are Probably Wrong”.

So, if knowledge itself is constantly shifting, is there any point in acquiring it?

If there were simple answers to this question, we wouldn’t keep on debating the issue, but I think part of the answer lies in the nature of concepts.

Now, concepts / categories are the building blocks of knowledge. But they are themselves surprisingly difficult to pin down. Once upon a time, we had the simple view that there were ‘rules’ that defined them. A dog has four legs; is a mammal; barks; wags its tail … When we tried to work out the rules that defined categories, we realized that, with the exception of a few mathematical concepts, it couldn’t be done.

There are two approaches to understanding categories that have been more successful than this ‘definitional’ approach, and both of them are probably involved in the development of concepts. These approaches are known as the ‘prototypical’ and the ‘exemplar’ models. The key ideas are that concepts are ‘fuzzy’, hovering around a central (‘most typical’) prototype, and are built up from examples.

A child builds up a concept of ‘dog’ from the different dogs she sees. We build up our concept of ‘far-right politician’ from the various politicians presented in the media.
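To make the two models concrete, here is a toy sketch: the prototype model classifies a new item by its distance to each category’s average member, while the exemplar model sums its similarity to every stored example. The feature vectors are invented stand-ins for perceptual features.

    # Toy contrast of prototype vs exemplar classification.
    # Feature vectors are invented stand-ins for perceptual features.
    import math

    categories = {
        "dog":    [(0.9, 0.2), (0.7, 0.3), (0.8, 0.1)],
        "banana": [(0.1, 0.9), (0.2, 0.8)],
    }

    def prototype_classify(item):
        # Each category's prototype is the mean of its exemplars.
        prototypes = {
            name: tuple(sum(dim) / len(dim) for dim in zip(*exemplars))
            for name, exemplars in categories.items()
        }
        return min(prototypes, key=lambda n: math.dist(item, prototypes[n]))

    def exemplar_classify(item, c=4.0):
        # Summed, exponentially decaying similarity to every stored exemplar.
        def score(name):
            return sum(math.exp(-c * math.dist(item, e))
                       for e in categories[name])
        return max(categories, key=score)

    item = (0.75, 0.25)
    print(prototype_classify(item), exemplar_classify(item))   # dog dog

Notice that in the exemplar model, adding a new, unusual dog (a Chihuahua, say) immediately stretches the category, whereas in the prototype model it merely nudges the average. That difference is one reason diverse exemplars matter so much for building robust categories.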

Some concepts are going to be ‘fuzzier’ (broader, more diverse) than others. ‘Dog’, if you think about St Bernards and Chihuahuas and greyhounds and corgis, has an astonishingly diverse membership; ‘banana’ is, for most of us, based on a very limited sample of banana types.

Would you recognize this bright pink fruit as a banana? Or this wild one? What about this dog? Or this?

I’m guessing the bananas surprised you, and without being told they were bananas, you would have guessed they were some tropical fruit you didn’t know. On the other hand, I’m sure you had no trouble at all recognizing those rather strange animals as dogs (adored the puli, I have to say!).

To the extent that you’ve experienced diversity in your category members, the concept you’ve built will be a strong one, capable of allowing you to categorize members quickly and accurately.

In my article on expertise, I list four important differences between experts and novices:

  • experts have categories

  • experts have richer categories

  • experts’ categories are based on deeper principles

  • novices’ categories emphasize surface similarities.

How did experts develop these deeper, richer categories? Saying “10,000 hours of practice” may be a practical answer, but it doesn’t tell us why the number of hours is important.

One vital reason practice is important is that it grants the opportunity to acquire a greater diversity of examples.

Diverse examples, diverse contexts: this is what is really important.

What does all this have to do with knowledge and education?

Expertise (a word I use to cover the spectrum of expertise, not necessarily denoting an ‘expert’) is rooted in good categories. Good categories are rooted in their exemplars. Exemplars may change — you may realize you’ve misclassified an exemplar; scientists may decree that an exemplar really belongs in a different category (a ‘fact’ is wrong) — but the categories themselves are more durable than their individual members.

I say it again: expertise is rooted in the breadth and usefulness of your categories. Individual exemplars may turn out to be wrong, but a good category can cope with that — bringing exemplars in and out is how a category develops. So it doesn’t matter if some exemplars need to be discarded; what matters is developing the category.

You can’t build a good category without experiencing lots of exemplars.

Although, admittedly, some of them are more important than others.

Indeed, every category may be thought of as having ‘anchors’ — exemplars that, through their typicality or atypicality, define the category in crucial ways. This is not to say that they are necessarily ‘set’ exemplars, required of the category. No, your anchors may well be different from mine. But the important thing is that your categories have such members, and that these members are well-rooted, making them quickly and reliably accessible.

Let’s take language learning as an example (although language learning is to some extent a special case, and I don’t want to take the analogy too far). There are words you need to know, basic words such as prepositions and conjunctions, high-frequency words such as common nouns and verbs. But despite lists of “Top 1000 words” and the like, these are fewer than you might think. Because language is very much a creature of context. If you want to read scientific texts, you’ll want a different set of words than if your interest lies in reading celebrity magazines, to take an extreme comparison.

What you need to learn is the words you need, and that is specific to your interests. Moreover, the best way of learning them is also an individual matter — and by ‘way’, I’m not (for a change) talking about strategies, which is a different issue. I’m talking about the contexts in which you experience the words you are learning.

For example, say you are studying genetics. There are crucial concepts you will need to learn — concepts such as ‘DNA’, ‘chromosomes’, ‘RNA’, ‘epigenetics’, etc — but there is no such requirement concerning the precise examples (exemplars) you use to acquire those concepts. More importantly, it is much better to cover a number of different examples that illuminate a concept, rather than focus on a single one (Mendel’s peas, I’m looking at you!).

Genetics is changing all the time, as we learn more and more. But that’s an argument for learning how to replace outdated information (an area of study skills sadly neglected!), not an argument for not learning anything in case it turns out to be wrong.

To understand a subject, you need to grasp its basic concepts. This is the knowledge part. To deal with the mutability of specific knowledge, you need to understand how to discard outdated knowledge. To deal with the sheer amount of knowledge relevant to your studies and interests, you need skills in seeing what information is important and relevant, and in managing that information so that it is accessible when needed.

Accessibility is key. Whether you store the information in your own head or in an external storage device, you need to be able to lay hands on it when you need it. And here’s the nub of the problem: you need to know when you need it.

This problem is the primary reason why internal storage (in your own memory) is favored by many. It’s only too easy to file something away in external storage (physical files; computer documents; whatever) and forget that it’s there.

But what all this means is that what we really need in our memory is an index. We don’t need to remember what a deoxyribose sugar is if we can instantly look it up whenever we come across it.

Or do we?

This is the point, isn’t it? If you want to study a subject, you can’t be having to look up every second word in the text, you need to understand the concepts, the language. So you do need to have those core concepts well understood, and the technical vocabulary mastered.

So is this an argument for rote memorization?

No, because rote memorization is a poor strategy, suitable only for situations where there can be no understanding, no connection.

We learn by repetition, but rote repetition is the worst kind of repetition there is.

To acquire the base knowledge you need to build expertise, you need repetition through diverse examples. This is the art and craft of good instruction: providing the right examples, in the right order.

The changing nature of literacy. Part 4: Models & Literacies

This post is the fourth and last part in a four-part series on how education delivery is changing, and the set of literacies required in today’s world. Part 1 looked at textbooks; Part 2 at direct instruction/lecturing; Part 3 at computer learning. This post looks at learning models and types of literacy.

 

Literacy. What does it mean?

Literacy is about being able to access information locked up in a code; it's also about being able to use that code. To be literate is to be able to read and write.

There's also another aspect of literacy that goes beyond mere decoding. This is about reading with understanding, with critical awareness.

Argument around the dangers of modern technology tends, in the way of arguments, to simplistically characterize the players: Internet = short, shallow; Social media = frivolous, distracting; Games = frivolous; Textbooks, Lectures = serious, deep, instructive.

But of course this is ridiculous even if we restrict ourselves to the learning context. Even social media have their uses. Even games can teach. And even textbooks and lectures can be shallow, or uninstructive, or inaccurate. (Indeed, way back in my first year of university I experienced a calculus lecturer who, I believe, reduced my understanding of calculus!)

The internet is, as we all recognize, a two-sided tool (but every tool is). Many people worry about the misinformation, the shallowness of much of the information, the superficiality of surfing, the way people might get stuck in a little corner that reinforces their vulnerabilities or prejudices, and so on.

We can say the same about infographics (data visualization, visual communication, call it what you will). It’s fostered as a way of helping us deal with the complexity and quantity of information (and I’m a big fan of it), but some people have criticized it for its potential for misinformation. Of course, text (wherever found) is far from pure in this respect!

But we don’t deal with misinformation by banning it (well, some of us don’t); we deal with it by providing the tools and the education so that people can recognize when something is being dangerously misleading or just plain wrong.

So, one of the important aspects of literacy (once you get beyond the decoding level) is being able to evaluate the information.

Why do we talk about digital literacy? Do we really need a new term (or terms)?

It comes down to skills. Because that is what literacy is: it's a skill (with all that that implies). And the new literacies do, undeniably, require new skills.

As far as the decoding aspect is concerned, well, text is still text. And textbooks have always included illustrations, so you could say that that is not new either. But that would be a mistake. The problem with visualizations is that it is not obvious that there's a skill to reading them — they're not as transparent as most believe (hence the misinformation claim). Humans have always used pictures to communicate; it is only recently that these have become sufficiently sophisticated to warrant the term 'language'.

So one of the modern literacies must be visual language, which like verbal language (and math and music), comes in different flavors. We wouldn’t use the same strategies to interpret and analyze a novel as we would a chemistry text, or a poem. We need to develop the same understanding of the taxonomy of visual language.

So I think we should include visual literacy in our new literacy set.

But of course, the new information delivery systems have requirements that go beyond content. Being able to use the code goes beyond reading text and pictures. It involves being able to navigate the delivery system. With a book, you just have to turn the pages. But with hyperspace, learning spaces, video-books, and so on, 'reading' is more complicated.

This is the important thing, the qualitative shift: the shift from linearity. Having a space, be it the whole of the internet or a confined learning space, in which you can go in many directions, in which there is no one path, may be empowering and richly layered, but it is not a place you can throw anyone into without training. Not if they are going to truly benefit from it. Like the need for visual literacy, this is another under-recognized need.

The complexity of these spaces and their navigation has, however, led to a number of useful distinctions being made — between digital literacy and computer literacy, information literacy, and media literacy (among others). Basically, these point to the need to distinguish the ability to use technology (know the language of software — What’s a window? What’s the difference between a browser and a search engine? Do you hashtag your tweets? Do you use folders?) from the ability to find, filter, and evaluate information, and from the ability to actively participate in the information flow across media (Do you change your verbal style appropriately when you move from a tweet to a YouTube script to a written report to a comment on someone’s blog? Do you use different modes of analysis and evaluation when viewing different media?).

Given that we want students to become adept at all of these, how should we teach them?

In an interview, Will Richardson, a teacher whose experiences with interactive Web tools in the classroom led him to write Blogs, Wikis, Podcasts, and Other Powerful Web Tools for Classrooms, talks about the need for teachers to have a visible presence on the Web, to participate in learning networks, and about how this openness is a huge culture shift for the closed shop of teachers. On network literacy as a key skill: “students should be able to create, navigate, and grow their own personal learning networks in safe, effective, and ethical ways. It’s really about the ability to engage with people around the world in these online networks, to take advantage of learning opportunities that are not restricted to a particular place and time, and to be conversant with the techniques and methodologies involved in doing this.” And on how kids may be more technologically savvy, but still need help sorting out which information, and which people, to trust.

My favorite bit: he talks about Rethinking Education in the Age of Technology: The Digital Revolution and Schooling in America, which apparently discusses how historically we used to have an apprenticeship model of education, which moved to the factory model, where it’s all about training everyone the same way, and how we’re now moving back to a more individualized, self-directed and flexible lifelong-learning model. Put in those terms, it seems clear that we can’t just keep tweaking; the changes are more fundamental than that.

He also asks why no one is consciously teaching kids how to read and write in linked environments — which relates back to my point about learning to traverse non-linear spaces.

But the onus shouldn't (and can't) be all on the teacher. They need a structure that supports them.

But as with learning networks and digital tools (Facebook, Twitter, blogs, RSS, Scribd, Flickr, Tumblr, Mashable, ...), the structures too keep changing under their feet: Blackboard, Moodle, Udemy, Instructure (to pick out some old and some new).

It's perhaps easier when the structure is purely online. (In the U.S., the Keeping Pace with K-12 online learning 2010 report tells us that state virtual schools or state-led online learning initiatives now exist in 39 states, and 27 states plus Washington DC have at least one full-time online school operating statewide.) But mostly online learning occurs side-by-side with face-to-face learning. (The report estimates that about 50% of all districts are operating or planning online and blended learning programs.)

A report profiling 40 schools that have blended-learning programs has found six basic models of blended learning:

  • Face-to-face driver: face-to-face teachers deliver most of the curricula. The physical teacher deploys online learning on a case-by-case basis to supplement or remediate, often in the back of the classroom or in a technology lab.
  • Rotation: within a given course, students rotate on a fixed schedule between learning online in a one-to-one, self-paced environment and sitting in a classroom with a traditional face-to-face teacher. The face-to-face teacher usually oversees the online work.
  • Flex: uses an online platform to deliver most of the curricula. Teachers provide on-site support on a flexible and adaptive as-needed basis through in-person tutoring sessions and small group sessions.
  • Online-lab: relies on an online platform to deliver the entire course, but in a lab environment. Usually these programs provide online teachers. Paraprofessionals supervise, but offer little content expertise.
  • Self-blend: encompasses any time students choose to take one or more courses online to supplement their traditional school’s catalog. The online learning is always remote.
  • Online-driver: an online platform and teacher deliver all the curricula, with students working remotely for the most part; face-to-face check-ins are sometimes optional and sometimes required.

As we can see (and as was also discussed in the Keeping Pace report), online learning is not about making the teacher redundant! No surprise when you consider that a major aspect of online learning (and its attraction for many students) is that it personalizes learning.

This is also echoed at university level. A spokesman for the Pearson Foundation, discussing a survey of over 1,200 college students, said: "There seems to be this belief among students that tablets are going to fundamentally change the way they learn and the way they access what they are learning. Students see these devices as a way to personalize learning." Students don't see tablets so much as a means of accessing digital textbooks as a means to access e-mail, manage assignments and schedules, and read non-textbook materials such as study aids, reports, and articles. (You might also like to read about one university's experience introducing iPads into the classroom.)

In the same way, a study involving students in China and Hong Kong found that Facebook was being used to let them connect with faculty and other students, provide comments to peers/share knowledge, share feelings with peers, join Groups established for subjects, share course schedules and project management calendars, and (via educational applications) organize learning activities.

So one aspect of online learning is management and collaboration.

Of course online learning is not only about personalizing learning. It's also about broadening access to quality educational resources. The open course movement is perhaps more advanced at university level (exemplified by MIT's OpenCourseWare (updated: now Open Education Global), Yale's Open Courses, the Open University's Learning Space), but in Iowa, schools will soon have access to a wide variety of digital materials from a central repository using Pearson Education's Equella. Certainly the internet is rife with educational materials aimed at K-12, but there are great benefits from the more formal structure of such a repository.

But this wonderful cornucopia is also the biggest problem. So many resources. And so many structures, programs, digital tools. It takes a lot of time and effort to master each one, and who wants to put in that effort unless they’re sure it’s really important and going to last?

There's no good answer to that, I'm afraid. We are living in a time of transition, and this is the price of that.

But we can try and develop our own 'rules of engagement'. Something to filter out the deluge of new tools and new systems and new resources.

When doing so, we need to consider the two principal, and different, issues involved in this revolution in information delivery systems, which should be kept quite distinct when thinking about them (however muddled together they will be in application). One concerns their use in learning — do textbooks need 'bells and whistles' to be more effective means of learning? what is the best way to frame this information so that students (at their grade level) can understand and remember it? This is the how question. For this we need to work out the different strategies that each delivery system needs to be an effective learning tool, and the different contexts in which each one is effective.

The other issue concerns the world for which the education system is supposedly training students. How is information delivered today? How do people work with information? This is the what question; the issue of content — though not in the 'core knowledge' sense.

Although, part of the issue does concern this question of core content. Because the fact is, however we may pine for the days when we all knew the same things, read the same books, could recite the same poems (no, we never really had those days; we just had smaller groups), there is too much information in the world for that to be possible. And society needs the diversity of many people knowing different things, because there's too much for us all to know the same thing. So what we need from our education system — and I know it's a truism but there you go, doesn't make it less true — is for our students to learn how to learn. Which means they need to know the best strategies for learning from the various information delivery systems they're going to be trying to learn from.

And there's something else, that stems from this point that there's too much for us all to know the same thing. We have this emphasis on doing well as an individual — individuals graduate, become famous, get Nobel Prizes, get remembered in the history books. But science and scholarship, and politics and community development, have always benefited from the stimulation of different minds. The complexity of the world today means that we need that more than ever. The complexity of science today means that most discoveries are the results of a team rather than a single person. Even in mathematics, the archetypal home of the solitary genius.

For example, the Polymath Project began with one mathematical genius who decided to take one of the complex mathematical problems he had struggled to solve to his blog. He threw it out there. And readers threw ideas back. Since then, several papers have been published in journals under the collective name DHJ Polymath.

This is an example of the open science movement (see the Open Knowledge Foundation and the Open Science Summit), and it raises the question: is the ‘traditional’ way of doing science really the best way? Let’s bear in mind that the ‘traditional’ way is not in fact all that traditional. It’s a product of its times (and rather recent times at that). We shouldn’t confuse the process of scientific thinking with the institutionalization of science. Proponents of Open Science argue that the advent of the internet can break right through the inertia of the institutions, can allow collaboration and the processing of huge data-sets in ways that are far quicker and more efficient.

This is the world we need to educate for. Educate ourselves and our children. And the heart of it is collaboration. Which is one of the reasons we shouldn't be keeping social media out of the classroom. We just have to use it in the right way.

I began this series with Denmark allowing internet access during exams. So let's finish by returning to this issue.

As with the wider question of education, we need to ask ourselves what testing is for. First of all there's the point that, like note-taking, testing has an obvious purpose and a less obvious one. The obvious one is that it provides a measurement of how well a student knows something (we’ll get to the squirrelly ‘knows’ in a minute); the less obvious is that testing helps students learn. (For note-taking, the obvious purpose is that it provides a record; the less obvious is the same as for testing: it helps you learn.) Many tests may be (or perhaps should be) primarily for learning.

Final exams, on the other hand, are usually solely about assessment. But then we must ask, assessment of what? What do we mean by 'know'? There are topics within subjects which are 'core' — crucial details and understandings without which the subject cannot be understood — cell division in biology; atomic structure in chemistry. But there are many other details that you don't need to have in your head — but you do need to 'know' them enough so that you can find them readily, and fit them into their place readily.

Anyone who can write well and develop an argument in depth on a specialist topic in a three-hour exam period, with access to the internet, deserves to pass (I'm assuming, of course, that there are adequate guards against plagiarism!). As with course-work, access to the internet simply raises the standard.

 

These posts have all been rather a grab-bag. This is such a wide topic, with so many issues, and everything is in such a state of flux. To write coherently on this would require a book. Here I have simply tried to raise some issues, and point to a diverse range of articles and tools that might be of interest. Do add any others (issues, articles, tools) in the comments.

The changing nature of literacy. Part 3: Computers

This post is the third part in a four-part series on how education delivery is changing, and the set of literacies required in today’s world. Part 1 looked at the changing world of textbooks; Part 2 looked at direct instruction/lecturing. This post looks at computer learning.

The use of computers in schools and for children at home is another of those issues that has generated a lot of controversy. But like e-readers, they’re not going back in the box. Indeed, there’s apparently been a surge of iPads into preschool and kindergarten classrooms. There are clear dangers with this — and equally clear potential benefits. As always, it all depends how you do it.

But the types of guidance and restrictions needed are different at different ages. Kindergarten is different from elementary is different from middle grade is different from high school, although media reports (and even researchers) rarely emphasize this.

Media reports last year cited two research studies as evidence that home computers have a negative effect on student achievement, particularly for students from low-income households. One involved 5th to 8th grade students in North Carolina; the other, Romanian students aged 7 to 22.

The Romanian study concerned low-income families who won government vouchers for the purchase of a personal computer. The study found that, although there was an increase in computer skills and fluency and even an apparent increase in general cognitive ability, academic performance (in math, English, and Romanian) was negatively affected. Use of the computers was mostly focused on games, at the expense of doing homework and reading for pleasure (and watching TV).

Interestingly, children with parents who imposed rules on computer use were significantly less skilled and fluent on the computer, but no better on homework or academic achievement. On the other hand, those who had parents who imposed rules on homework retained the benefits in terms of computer skills, and the negative impact on academic achievement was significantly reduced.

Additionally, there was some evidence that younger children showed the biggest gains in general cognitive ability.

Similarly, the North Carolina study (pdf) found that students who gained access to a home computer between 5th and 8th grade tended to show a persistent decline in reading and math test scores. But these results are very specific and shouldn’t be generalized. Those who already had computers prior to the 5th grade scored significantly above average, and showed improvement over time.

An Italian study also found positive benefits of computer ownership: PISA achievement significantly correlated with 15-year-olds' use of computers at home as an educational tool. However, there seemed to be an optimal level, with the effect becoming smaller the more often they used the computer, and even becoming negative for those who used school computers almost every day.

The North Carolina and Romanian studies indicate that the problem appears to be when computer use knocks out more beneficial activities such as doing homework and reading for pleasure. It's unsurprising that this might be more likely to occur among children and adolescents who gain ready access to a computer after many years of "deprivation".

In Britain the e-Learning Foundation has recently claimed that over a million children will perform significantly worse on exams (an average grade lower) because they don’t have internet access at home. This claim is based on research showing that students who use revision materials on the internet have an advantage over students who don’t have access to such materials. Surely no surprise there! And no contradiction to the previous research. There is undoubtedly a lot of very good educational material on the internet, and even if you have a good teacher, getting a different take on things can help you understand more fully. If you have a poor teacher, this is even more true!

So it all comes down to how computers are being used (and what their use is knocking out, for there is only so much time in the day). Bearing on this point, two programs in the U.S. have, with some apparent success, introduced computers into disadvantaged homes in such a way that they support a more effective home-learning environment and thus improve academic achievement.

There’s also an argument that laptops have shown little benefit in general because the schools in which they’re used have, by and large, good teachers and good students. But the true value of laptops is for those without access to good teachers. For ten years, computers have been placed into brick walls in public places in hundreds of villages and slums in India, Cambodia and Africa, with apparently very successful results.

An extension of the project has involved British grandparents, many of them retired teachers, volunteering their time to talk, using Skype, to children in the slums and villages of India. From this has developed the model of the self-organized learning environment (SOLE), where children work in self-organized groups of four or five, exploring ideas using computers, the exploration triggered (but not constrained) by questions set by teachers.

I must admit, while I applaud this sort of thing, I have to shake my head at the surprise that this sort of activity is effective, and the comment that the students “maintain their own order”. My children had a Montessori education in their early years — in Montessori schools children habitually “self-organize” and teach themselves (with of course the teachers’ guidance, and the use of the resources provided).

But of course, it helps to have the right resources. Five years of data-gathering from math-tutoring programs has revealed how 10th and 11th grade students use a help button that offers progressively more in-depth hints and eventually gives the answer to the question. Basically, most students (70-75%) strenuously resist seeking help, even after several errors. When they do eventually give in and ask for a hint, they do so only because they have given up trying to solve the problem and are aiming to cheat — 82% of those using the hint tool didn’t stop to read it, just clicked through all the hints to get to the answer.

Most recently, then, the researchers changed a geometry tutoring program so that the help tool would encourage students to reflect on their problem-solving strategies — for example, by opening a help window if a student seems to be guessing, or doesn’t seem to be reading the hints. In pilot studies, the new help tutor significantly improved students’ help-seeking behavior.
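
To make that mechanism concrete, here is a minimal sketch of the kind of heuristic such a help tool might use. To be clear, this is purely my own illustration under assumed thresholds: the names and numbers (HelpMonitor, MIN_READ_SECONDS, MAX_RAPID_GUESSES) are invented, not details of the actual geometry tutor.

```python
# Hypothetical sketch of a hint-abuse detector, loosely modeled on the
# behavior described above. All thresholds are invented for illustration.

from dataclasses import dataclass, field

MIN_READ_SECONDS = 3.0   # assumed: any faster counts as "not really reading"
MAX_RAPID_GUESSES = 3    # assumed: this many quick wrong answers = guessing

@dataclass
class HelpMonitor:
    hint_view_times: list = field(default_factory=list)  # seconds per hint
    rapid_wrong_answers: int = 0

    def record_hint_view(self, seconds_on_hint: float) -> None:
        self.hint_view_times.append(seconds_on_hint)

    def record_answer(self, correct: bool, seconds_to_answer: float) -> None:
        if correct:
            self.rapid_wrong_answers = 0
        elif seconds_to_answer < MIN_READ_SECONDS:
            self.rapid_wrong_answers += 1

    def should_prompt_reflection(self) -> bool:
        """True if the student appears to be guessing, or is clicking
        through hints too quickly to be reading them."""
        skimming = (len(self.hint_view_times) >= 2 and
                    all(t < MIN_READ_SECONDS
                        for t in self.hint_view_times[-2:]))
        guessing = self.rapid_wrong_answers >= MAX_RAPID_GUESSES
        return skimming or guessing
```

In this sketch, the tutor would call should_prompt_reflection() after each student action, and open the reflection window whenever it returns True.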

But perhaps these children wouldn’t so misunderstand the use of the help button if they’d been taught in a learning environment that encouraged peer-tutoring. As any teacher knows, the best way to learn something is to teach it!

Teachable Agents software allows students to customize a virtual agent and teach it mathematics or science concepts. The agent questions, misunderstands, and otherwise learns realistically. Pilot studies of these programs have included kindergarten through to college.

Additionally, the virtual agent always explains how it came to an answer, and this seems to transfer to the student-teachers, helping them learn how to reason.

But I'd like to note (because it sounds a wonderful program) that you don’t need fancy software to harness the power of peer-tutoring. The Learning Community Project (English translation) operates in nearly 600 rural schools in Mexico and is planned to go into nearly 7000 rural and urban schools. In this model, students choose a learning project and explore it, guided by adult tutors. They then formally present the results of their inquiry to fellow students, tutors, and parents. When they have developed mastery in an area, they tutor other students who are exploring that area. The learning of students and the training of tutors builds a fund of common knowledge that is available in the community of neighboring schools.

But anyway, the message seems clear, if rather obvious: computers and the internet can be a very positive tool for learning, but, as with books and lectures, there are right ways and wrong ways of implementing these delivery systems.

In the next and last post in this series, I'll discuss what literacy means in today's world, and the new learning models that are being developed.

[Update: Note that some links have been removed as the linked article is no longer available]

The changing nature of literacy. Part 2: Lecturing

This post is the second part in a four-part series on how education delivery is changing, and the set of literacies required in today’s world. Part 1 looked at the changing world of textbooks. This post looks at the oral equivalent of textbooks: direct instruction or lecturing.

There’s been some recent agitation in education circles about an article by Paul E. Peterson claiming that direct instruction is more effective than the ‘hands-on’ instruction that's so popular nowadays. His claim is based on a recent study which found that increased time on lecture-style teaching versus problem-solving activities improved student test scores (for math and science, for 8th grade students). Above-average students appeared to benefit more than below-average students, although the difference was not statistically significant.

On the other hand, a college study found that a large first-year physics class taught in a traditional lecture style by an experienced and highly rated professor performed more poorly on several measures than another class taught only by engaging in small-group problem-solving tasks. Attendance improved by 20% in the experimental class, and engagement (measured by observers and "clicker" responses) nearly doubled. Though the experimental class didn’t cover as much material as the traditional class, dramatically more students showed up for the unit test, and they scored significantly better (average score of 74% vs 41%).

It must be noted, however, that this experiment only ran for a week (3 hours instruction).

But the researchers of the middle-grade study did not conclude that lecturing was superior, or that their results applied to college students. Their very reasonable conclusion was that “Newer teaching methods might be beneficial for student achievement if implemented in the proper way, but our findings imply that simply inducing teachers to shift time in class from lecture-style presentations to problem solving without ensuring effective implementation is unlikely to raise overall student achievement in math and science. On the contrary, our results indicate that there might even be an adverse impact on student learning.”

The whole issue reminds me of the phonics debate. I don’t know what it is about education that gets people so polarized, when it seems so obvious that there are no simple answers. What makes an effective strategy is not simply the strategy itself, but how it is carried out, who is using it, and when they are using it.

In this case, the quality and timing of these ‘problem-solving activities’ is perhaps central. The rule of thumb that twice as much time should be allocated to problem-solving activities as to direct instruction is perhaps being applied with too little understanding about the role and usefulness of specific activities.

But it’s obvious that there are going to be strong developmental differences. The ‘best’ means of teaching 18-year-olds is not going to suit 5-year-olds, and vice versa. So we can’t conclude anything about middle school by looking at college studies, or college by looking at middle school studies.

So, bearing in mind that a discussion of college lecturing has little to do with direct instruction in schools, let’s look a little further into college lectures, given that this is the predominant method of instruction at this level.

First of all, we must ask what students are doing during lectures. Given many teachers’ distress at their students’ activity on phones and laptops during class, it’s worth noting the findings of two recent studies that spied on college students in class rather than relying on self-reporting.

The first study involved 45 students who allowed monitoring software to be installed. Distinguishing “productive” applications (Microsoft Office and course-related websites) from “distractive” ones (e-mail, instant messaging, and non-course-related websites), the researchers found that non-course-related software was active about 42% of the time. However, only one type of distractive application was significantly correlated with poorer academic performance: instant messaging. This despite the fact that IM windows had the shortest average duration. (It’s also worth noting that students massively under-estimated their instant-messaging use: by 40%, compared with, for example, 7% for email use.)

It seems likely that this has to do with switching costs. Those who read my recent blog post on working memory might recall that switching focus from one item to another has high costs. Moreover, it seems that the more frequently (and thus briefly) you switch focus, the higher the cost.

The other study used human observers rather than spyware, with obvious drawbacks. But the finding I found interesting was the dramatic jump between first-year and second- and third-year law students: more than half of the latter who came to class with laptops used them for non-class purposes more than half the time, compared to 4% of first-year students. While the teacher took this as a signal to ban laptops in his upper-year courses, perhaps he should have rather taken it as evidence that his students had become more discerning about what was relevant. We need to know how this laptop use mapped against performance before drawing conclusions.

But not all teachers are reflexively against distractive technology. The banning of cellphones from classrooms, and general distress about social media, is starting to be offset by teachers setting up “backchannels” in their classes. These digital channels are said to encourage shy and overwhelmed students to ask questions and make comments during class.

Of course, most teachers are still anti, and a lot of that may be driven by a fear of losing control of the class, or being unable to keep up with the extra stream of information (particularly in the face of the students’ facility in multitasking).

And maybe some teachers are so antagonistic toward distractive technology because they feel it’s insulting. It implies they’re boring.

Well, unfortunately, many students do find a lot of their lectures boring. A 2009 study of student boredom suggested that almost 60% of students find at least half their lectures boring; half of those found most or all of their lectures boring.

But I don’t think the answer to this is to remove their toys. Do you think they’ll listen if they don’t have anything else to do? The study found bored students daydream (75% of students), doodle (66%), chat to friends (50%), send texts (45%), and pass notes to friends (38%).

It’s not that teachers have to entertain them! Granted it’s easier to hold students’ attention if you’re doing explosive chemistry experiments, but students really aren’t so shallow that you have to provide spectacles. They are there (at college level at least) because they want to learn. But you do have to present the information in a way that facilitates learning.

One of the main contributors to student boredom is apparently the (bad) use of PowerPoint.

But even practical sessions, supposedly more engaging than lectures, appear to bore students. Lab work and computer sessions achieved the highest boredom ratings in the study.

Because boredom is not as simple a concept as it might appear. Humans are designed for learning. This is our strength. Other animals may be fast, may be strong, may have sharp claws or teeth, or venom. Humans are smart, and curious, and we know that knowledge is power. Humans like to learn. So what goes wrong during the education process?

Well, one of the problems is that there’s a cognitive “sweet spot”. If you make something too difficult, most people will be put off. If you make something too easy, they won’t bother. The sweet spot of learning is that point where the amount of cognitive effort is not too little and not too great — of course you have to find that point, and a complicating factor is that this varies with individuals.

One area where creators have had a lot of success in finding that sweet spot (because they try very hard) is video games.
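
For the curious, here is a toy sketch of one common way games chase that sweet spot: a simple “staircase” that nudges difficulty toward a target success rate. The target rate, slack, and step size are my own invented parameters, not taken from any particular game or study.

```python
# Toy "staircase" difficulty adjuster: nudge challenge up when the
# learner is cruising, down when they are struggling. All parameters
# are invented for illustration.

TARGET_SUCCESS = 0.8   # assumed: aim for roughly 4 successes in 5 attempts
SLACK = 0.2            # assumed: tolerated shortfall before easing off
STEP = 0.05            # assumed: size of each difficulty adjustment

def adjust_difficulty(difficulty: float, recent_results: list[bool]) -> float:
    """Return a new difficulty (0.0-1.0) based on recent successes."""
    if not recent_results:
        return difficulty
    success_rate = sum(recent_results) / len(recent_results)
    if success_rate > TARGET_SUCCESS:
        difficulty += STEP              # too easy: add challenge
    elif success_rate < TARGET_SUCCESS - SLACK:
        difficulty -= STEP              # too hard: ease off
    return max(0.0, min(1.0, difficulty))

# e.g. adjust_difficulty(0.5, [True, True, True, True, True]) -> 0.55
```

The point of the dead zone between the two thresholds is to avoid yanking the difficulty around on every attempt, which is one way of keeping effort “not too little and not too great”.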

How can we harness the power that video games seem to have? A book called "Reality Is Broken: Why Games Make Us Better and How They Can Change the World" points out that creating Wikipedia has so far taken about 100 million hours of work, while people spend twice that many hours playing World of Warcraft in a single week.

Some of the features of good games that researchers believe are important are: instant feedback, small rewards for small progress, occasional unexpected rewards, continual encouragement from the computer and other players, and a final sense of triumph. Most of this is no news to educationalists, but there’s a quote I really love: “One of the most profound transformations we can learn from games is how to turn the sense that someone has ‘failed’ into the sense that they ‘haven’t succeeded yet.” (Tom Chatfield, British journalist and author)

That quote is a guide to how to find that sweet spot.

Providing motivation, of course, as we all know, is crucial. Where’s the relevance? Traditionally, it may have been enough to simply tell students that they needed to know something, and they’d believe you. But it’s not just that students have become cynical and less respectful (!) — the fact is, they have good reason to question whether traditional content and traditional strategies have any relevance to what they need to know.

Here’s a lovely example of the importance of motivation and relevance. In India Bollywood musicals are madly popular. For nine years, these movies have had karaoke-style subtitles. The first state to broadcast the subtitles was Gujarat. Because viewers were so keen to sing along, they paid attention to these captions, often copying them out to learn. As a consequence, literacy has improved. Newspaper reading in one Gujarat village has gone up by more than 50% in the last decade; women, who can now read bus schedules themselves, have become more mobile, and more children are opting to stay in school. Viewers in India have shown reading improvement after watching just eight hours of subtitled programming over six months.

This has apparently worked in more literate nations as well. Finland (and we all know how well it scores in education rankings) attributes much of its educational success to captions. For several decades now, Finland has chosen to subtitle its foreign language television programs (in Finnish) instead of dubbing over them. And Finnish high school students read better than students from European countries that dub their TV programs, and are more proficient at English.

But songs, it seems, are better for this than dialog.

Of course this strategy is only useful at a certain stage — when learners have basic skills, but are having trouble moving beyond.

This is the point, isn’t it? Different situations (a term encompassing the learners, their prior knowledge, and their goals, as well as the content and its context) require different strategies. For example, I recently read a discussion on Education Week prompted by a teacher being forced by his/her institution to use PowerPoint in a class for ESL students to improve their English speaking skills.

PowerPoint slides can be very effective, but far too many aren’t. Similarly, lab sessions can be true learning experiences, or simply “paint-by-numbers” events for which the result is known. Lectures can be a complete waste of time, or true learning experiences.

Consider marathon oral readings of famous texts. A recent article on Inside Higher Ed said that such “events help convey messages, engage students, and foster community on their campuses in ways that reading alone cannot do”. And there was a nice quote from a student: "Until you hear another student read it in his or her own voice, you don't really understand the vast possibilities for interpretation."

What’s the difference between this and a lecture? Well, in one sense none. Both depend on delivery and presentation. I’ve been to some very engaging and inspirational lectures, and some readings can be flat and uninspiring. But the critical difference is that one is literature (a story) and the other is expositional. To make instructional text engaging, you have to work a lot harder. And this is true regardless of the mode of delivery — lectures and textbooks are the oral and written variants of linear exposition.

Is it fair to dismiss a strategy just because some people perform it badly? Is it smart to require a strategy because in some circumstances it is better than another?

We need a better understanding of the situations in which different strategies are effective, and the different variables that govern when they are effective. And we need more flexibility in delivery.

Which brings us to computer learning, which I’ll discuss in the next post.

[Update: Please note that some links have been removed as the articles on other sites are no longer available]

The changing nature of literacy. Part 1: Textbooks

As we all know, we are living in a time of great changes in education and (in its broadest sense) information technology. In order to swim in these new seas, we and our children need to master new forms of literacy. In this and the next three posts, I want to explore some of the concepts, applications, and experiments that bear on this.

Apparently a Danish university is going to allow students access to the internet during exams. As you can imagine, this step arouses a certain amount of excitement from observers on both sides of the argument. But really it comes down, as always, to goals. What are students supposed to be demonstrating? Their knowledge of facts? Their understanding of principles? Their capacity to draw inferences, make connections, apply them to real-world problems?

I’m not second-guessing the answers here. It seems obvious to me that different topics and situations will have different answers. There shouldn’t be a single answer. But it’s a reminder that testing, like learning, needs to be flexible. And education could do with a lot more clear articulation of its goals.

For example, I came across an intriguing new web app called Topicmarks, which enables you to upload a text and receive an automated précis in return. On the one hand, this appalls me. How will students learn how to gather the information they need from a text if they use such tools? How can a summary constructed automatically possibly elicit the specific information you’re interested in? (Updated: this no longer appears to exist, but you can see an example at the end of this post, where I’ve appended the summary it produced of a Scientific American article.)
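
For readers curious how an automated précis can be constructed at all, here is a toy sketch of the classic frequency-based extractive approach: score each sentence by how common its content words are in the text, and keep the top few. This is emphatically not Topicmarks’ actual method (which, as far as I know, was never published); it’s just the simplest illustration of the idea.

```python
# Toy extractive summarizer: pick the sentences whose content words are
# most frequent in the text. A generic classic technique, not
# Topicmarks' actual (unpublished) method.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "are",
             "that", "it", "for", "on", "as", "with", "this"}

def content_words(text: str) -> list:
    return [w for w in re.findall(r"[a-z']+", text.lower())
            if w not in STOPWORDS]

def summarize(text: str, n_sentences: int = 3) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(content_words(text))

    def score(sentence: str) -> float:
        words = content_words(sentence)
        return sum(freq[w] for w in words) / len(words) if words else 0.0

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Present the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)
```

Even this crude version makes the worry obvious: the tool picks whatever the text dwells on, not whatever you need from it.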

Even if we assume it actually does a good job, it is worrying. And yet … There is too much information in the world for anyone to keep up with — even in their own discipline. There’s a reason for the spate in recent years of articles and books on how the invention of printing brought about a technological revolution — a need for new tools, such as indices, the idea of using the alphabet to order them, meaningful titles and headings, tables of contents. Because the flood of information, as we all know, requires new tools. This one (which will assuredly get better, as translation software apparently has) may have its place. Before we get all excited about the terrible consequences of automated summaries, and internet-access during exams, we should think about the world as it is today, and not the world for which the education system was designed.

The world for which the education system was designed was a simpler one, in terms of information. You gained information from people you knew, or from a book. Literacy was about being able to access the information in books.

But that’s no longer the case. Now we have the internet. We have hyperlinked texts and powerpoint slides, multimedia and social media. Literacy is no longer simply about reading words in a linear, unchanging text. Literacy is about being able to access information from all these new sources (and the ones that will be here tomorrow!).

Even our books are changing.

The simplest ‘modernized’ variant of the traditional textbook is the traditional textbook on a digital device. But e-readers are not well designed for textbook reading, which is quite different from novel reading.

A study involving 39 first-year graduate students in Computer Science & Engineering (7 women and 32 men; aged 21-53) who participated in a pilot study of the Kindle DX found that, seven months into the study, less than 40% of the students were regularly doing their academic reading on the e-reader. Apart from the obvious – students wanted better support for taking notes, checking references and viewing figures – the really interesting thing was the insight it gave into how students use academic texts.

In particular, students constantly switch between reading techniques, such as skimming illustrations or references before reading the text. They also use physical cues to help them remember where certain information was, or even to remember the information itself (something classical and medieval scholars relied on heavily; I have spoken of this in the context of the art of memory). Both of these are problematic with the Kindle.

Unsurprisingly, then, in a survey of 655 college students, 75% said that, if the choice was entirely theirs, they would select a print textbook. (The article also lists some of the digital textbook providers, and some open-access educational resources, if you’re interested.)

But e-readers are the future. (Don’t panic! This is not an either/or situation. There will still be a place for physical books — but that place is likely to become more selective.) The survey found a surge in the number of students who have a dedicated e-reader (39% vs 19% just five months earlier). Another, more general survey of over 1,500 end users in the US, the UK, Japan, India, Italy, and China, found that the amount of time spent reading digital texts now nearly equals time spent reading printed materials.

Nearly everyone (94%) who used tablets (such as iPads) either preferred reading digital texts (52%) or found them as readable as print (42%). In contrast, 47% of laptop users found digital text harder to read than print. While 40% of respondents had no experience of e-readers, this varied markedly by country. Surprisingly, the country with the highest use of e-readers was China. Rates in the US and the UK were comparable (57% and 56% had no experience of e-readers).

The age group unhappiest about reading on screen was the 40- to 54-year-olds. Falling into that age group myself, I speculate that this has something to do with the way our eyesight is beginning to fail! We’re not at the point of needing large font (or at least of accepting that we need it), but we find it increasingly difficult to read comfortably in conditions that are less than optimal.

So, we have a mismatch between e-readers and the way textbooks are read. There’s also the issue of the ‘textbook model’. Many think it’s broken. Because of their cost, because some subjects move so fast (and publishing moves so slowly) that they’re out of date before they come out, even because of their weight. And then there’s the question of whether students actually learn from textbooks, and how relevant they are to student learning today.

This is reflected in various attempts to revolutionize the textbook, from providing interactive animations (see, for example, a new intro biology textbook) to the ‘learning space’ being developed (again in biology — is this just happenstance, or is biology leading the way in this?). Here information is organized into interconnected learning nodes that contain all of the baseline information a textbook would include, plus supplemental material and self-assessments. So there are videos, embedded quizzes, information flow between students and the teacher.

One aspect of this I find particularly interesting: both students and teachers can write new nodes. So for example, in a pilot of this biology program, 19 students wrote 130 new nodes in one semester — clearly demonstrating their engagement in the course, and hopefully their much greater learning.
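
To picture the structure being described, here’s a minimal sketch of a learning node as a data type. The field names are my own guesses at the kind of content involved (baseline text, supplements, self-assessments, links, author), not the actual system’s schema.

```python
# Minimal sketch of a 'learning space' as a graph of interconnected
# nodes. Field names are illustrative guesses, not the real schema.

from dataclasses import dataclass, field

@dataclass
class LearningNode:
    title: str
    baseline_text: str                       # the textbook-level content
    supplements: list = field(default_factory=list)     # videos, articles
    quiz_questions: list = field(default_factory=list)  # self-assessment
    links: list = field(default_factory=list)           # related nodes
    author: str = "teacher"                  # students can author nodes too

    def connect(self, other: "LearningNode") -> None:
        """Link two related nodes in both directions."""
        self.links.append(other)
        other.links.append(self)

# e.g. a student-written node linked into the teacher's material:
# mitosis = LearningNode("Mitosis", "Cell division in eukaryotes...")
# cancer = LearningNode("Cancer and mitosis", "...", author="student")
# mitosis.connect(cancer)
```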

Another attempt at providing more user control is the “flexbook”. Flexbooks for K-12 classes enable teachers to easily select specific chapters from the content on the website, and put them together into a digital textbook in three formats (pdf, openreader, and html — the last being interactive, with animations and videos). You can also change the content itself.

Multimedia is of course all the rage. But, as I discuss in my book on effective note-taking, it’s not enough to simply provide illustrations or animations — it has to be done in the right way. Not only that, but the reader needs to know how to use them. Navigating a ‘learning space’ or multimedia environment is not the same as reading a book, and it’s not something our book-literacy skills directly transfer to.

And it’s not only a matter of textbooks. Textbooks have their own particular rules, but any expositional text has the potential to be recreated as a multimedia experience.

Here, for example, is a “video-book”: Learning From YouTube, "large-scale online writing that depends upon video, text, design, and architecture for its meaning making." The author, Alexandra Juhasz, talks about how “common terms of scholarly writing and publishing must be reworked, modified, or scare-quoted to most effectively describe and traverse the "limits of scholarship" of the digital sphere.”

She talks about how scholars should ask which book medium is best suited for their study (rather than simply assuming it must be a traditional book). Reading and writing practices are changing on the internet — rather than deploring or embracing the new habits, we should ask ourselves which practices are most appropriate for the specific material.

She also talks about the need to educate readers in new ways of doing things. We don’t want to simply equate internet use with surfing, with hyperactive jumping and skimming. That has a place, but the internet is also home to material (like her video-book) that requires lengthy and deep study.

And of course there’s an obligation on the net to actually provide the deeper information (at least in the form of links) that in print books we can fob off with references and recommended reading lists.

Note this point: scholars should ask which book medium is best suited for their study. It applies to textbooks too. Books are not being transformed into something different; they are blossoming. There is still room for straight texts. Nor should it — it most certainly should not — be assumed that throwing a bunch of animated videos into the mix is enough to turn a book into an exciting new learning experience. As with books, some of these new variants will be effectively presented, and some will not.

I’ve said we should think of this as a blossoming of the book concept. But are these new, blossoming variants, still books? Where are we going to draw the lines? Video-books and learning spaces are more like courses than books. Indeed, the well-known textbook publisher Pearson has recently partnered with the lecture capture provider Panopto — another sign of the movement from traditional textbooks to cloud-based “educational ecosystems”.

Perhaps it’s premature to try and draw any lines. Let’s consider the oral equivalent of textbooks: lecturing, or as it’s known at K-12 level, direct instruction. My post tomorrow will look at that.

 


Topicmarks summary (Scientific American article)

In humans, brain size correlates, albeit somewhat weakly, with intelligence, at least when researchers control for a person's sex (male brains are bigger) and age (older brains are smaller). Many modern studies have linked a larger brain, as measured by magnetic resonance imaging, to higher intellect, with total brain volume accounting for about 16 percent of the variance in IQ. But, as Einstein's brain illustrates, the size of some brain areas may matter for intelligence much more than that of others does. Studying the brains of 47 adults, Haier's team found an association between the amount of gray matter (tissue containing the cell bodies of neurons) and higher IQ in 10 discrete regions, including three in the frontal lobe and two in the parietal lobe just behind it. In its survey of 146 children ages five to 18 with a range of IQs, the Cincinnati group discovered a strong connection between IQ and gray matter volume in the cingulate but not in any other brain structure the researchers examined.

In a 2006 study child psychiatrist Philip Shaw of the National Institute of Mental Health and his colleagues scanned the brains of 307 children of varying intelligence multiple times to determine the thickness of their cerebral cortex, the brain's exterior part. Over the years brain scientists have garnered evidence supporting the idea that high intelligence stems from faster information processing in the brain. Underlying such speed, some psychologists argue, is unusually efficient neural circuitry in the brains of gifted individuals. The researchers used electroencephalography (EEG), a technique that detects electrical brain activity at precise time points using an array of electrodes affixed to the scalp, to monitor the brains of 27 individuals while they took two reasoning tests, one of them given before test-related training and the other after it. The results suggest that gifted kids' brains use relatively little energy while idle and in this respect resemble more developmentally advanced human brains.

Some researchers speculate that greater energy efficiency in the brains of gifted individuals could arise from increased gray matter, which might provide more resources for data processing, lessening the strain on the brain. In a 2003 trial psychologist Jeremy Gray, then at Washington University in St. Louis, and his colleagues scanned the brains of 48 individuals using functional MRI, which detects neural activity by tracking the flow of oxygenated blood in brain tissue, while the subjects completed hard tasks that taxed working memory. The researchers saw higher levels of activity in prefrontal and parietal brain regions in the participants who had received high scores on an intelligence test, as compared with low scorers. Lee and his co-workers measured brain activity in 18 gifted adolescents and 18 less intelligent young people while they performed difficult reasoning tasks. These tasks, once again, excited activity in areas of the frontal and parietal lobes, including the anterior cingulate, and this neural commotion was significantly more intense in the gifted individuals' brains.

Why your knowledge of normal aging memory matters

I’ve discussed on a number of occasions the effects that stereotypes can have on our cognitive performance. Women, when subtly reminded that females are supposedly worse at math, do more poorly on math tests; African-Americans, when subtly reminded of racial stereotypes, perform more poorly on academic tests. And beliefs about the effect of aging similarly affect memory and cognition in older adults.

Your beliefs matter. Those who believe that intelligence is fixed tend to disengage when something is challenging, while those who believe that intelligence is malleable keep working, believing that more time and effort will yield better results (see Fluency heuristic is not everyone’s rule and Regulating your study time and effort for more on this). In the same way, older adults who believe that declining faculties are an inevitable consequence of aging are less inclined to make efforts to counter any decline.

Moreover, if you believe that your memory will get progressively and noticeably worse as you get older, then you will tend to pay more attention to, and give more weight to, your memory failures. This will reinforce your beliefs, and so on, feeding back on itself. Bear in mind that we all, at every age, suffer memory failures! Forgetting things is not in itself a sign of age-related decline.

It’s important, therefore, that people have a realistic idea of what to expect in ‘normal’ aging. In the course of writing a short book on this topic (it will be out, I hope, early in the new year), I came across the Knowledge of Memory Aging Questionnaire (KMAQ). Research using this questionnaire has revealed the interesting finding that people know more about pathological memory aging than they do about normal memory aging.

You may find it interesting to know some of the questions, and how likely people are to get them right. So, let's look at one of these studies, involving 150 people, divided evenly into three age-groups (40-59; 60-79; 80+).

The oldest-old scored significantly more poorly than the other two groups, although the differences weren’t great (65% correct vs 70% and 69%). There was no overall difference between genders, but males were significantly more likely to answer “Don’t know” to questions about pathological memory.

But if we focus only on the subset of four questions that relate to stereotypes about normal aging in memory, there is much greater difference between the age groups (78% correct, 69%, 52%, for middle age, young-old, and oldest-old, respectively). These are the four questions (the answers are all “false”):

  • Regardless of how memory is tested, younger adults will remember far more material than older adults.
  • If an older adult is unable to recall a specific fact (e.g., remembering a person’s name), then providing a cue to prompt or jog the memory is unlikely to help.
  • When older people are trying to memorize new information, the way they study it does not affect how much they will remember later.
  • Memory training programs are not helpful for older persons, because the memory problems that occur in old age cannot be improved by educational methods.

Only one of these questions was reliably answered correctly, and that only by the middle-age adults (“If an older adult is unable to recall a specific fact, then providing a cue to prompt or jog the memory is unlikely to help”).

Looking at the individual questions, it’s interesting to see that the different age groups show different patterns of knowledge. Middle-age adults were most likely to answer the following questions correctly (between 42 and 45 of the 50 answered correctly):

  • [Q18] Signs and symptoms of Alzheimer’s Disease show up gradually and become more noticeable to family members and close friends over time. (true)
  • [Q17] Memory for how to do well-learned things, such as reading a map or riding a bike, does not change very much, if at all, in later adulthood. (true)
  • [Q1] “A picture is worth a thousand words” in that it is easier for both younger and older people to remember pictures than to remember words. (true)
  • If an older adult is unable to recall a specific fact (e.g., remembering a person’s name), then providing a cue to prompt or jog the memory is unlikely to help. (false)

Young-old adults also scored highly on Q17 and Q1, but their other top-scorers were:

  • [Q21] If an older person has gone into another room and cannot remember what he or she had intended to do there, going back to the place where the thought first came to mind will often help one recall what he or she had intended to do. (true)
  • Confusion and memory lapses in older people can sometimes be due to physical conditions that doctors can treat so that these symptoms go away over time. (true)

The oldest-old agreed that Q21 and Q18 were easy ones (indeed, 48 and 47 got these questions right), but after that, their next top-scorer was:

  • Lifelong alcoholism may result in severe memory problems in old age. (true)

Although average education levels were similar for the three age-groups, there was greater variability within the oldest-old — 9 didn’t finish high school, but 20 had tertiary degrees. In comparison, only one middle-aged and one young-old adult didn’t finish high school. The finding that the oldest-old were more likely to answer according to stereotypes of aging memory may therefore reflect, at least in part, the lower education of some individuals.

But let’s go back to my earlier comment that those who believe poorer memory is inevitable with age give more weight to their failures while being less inclined to deal with them. This study did indeed find that changes in memory test performance over five years were correlated with subjective memory complaints, but not with use of external aids. That is, people who were forgetting more, and noticing that they were forgetting more, did not engage in greater use of strategies that would help them remember.

Something to think about!


References

Hawley, K. S., Cherry, K. E., Su, J. L., Chiu, Y.-W., & Jazwinski, M. S. (2006). Knowledge of memory aging in adulthood. International Journal of Aging & Human Development, 63(4), 317-334.

Aging successfully

In a recent news report, I talked about a study of older adults that found that their sense of control over their lives fluctuates significantly over the course of a day, and that this impacts on their cognitive abilities, including reasoning and memory. ‘Sense of control’ — a person’s feeling that they are (or are not) in control of their life — is an attribute that includes perceived competence, as well as locus of control, and in general it tends to decline in older adults. But obviously it is an attribute that, across the board, varies dramatically between individuals.

In older adults, a stronger sense of control is associated with more successful aging, and among people in general, with better cognitive performance. This isn’t surprising, as it is entirely consistent with related associations we have found: between strategy use and cognitive performance; between the belief that intelligence is malleable rather than fixed and cognitive performance.

My point here, however, is the connection between these findings and other aspects of successful aging that impact mental performance.

For example, I have spoken before about the association between age-related hearing loss and cognitive impairment (see this recent New York Times blog post for a very nice report on this), and poor vision and cognitive impairment.

Similarly, high blood pressure, diabetes, and depression have all been implicated in age-related cognitive decline and dementia. (For more on these, see the topic collection on diabetes, the topic collection on depression, and the new topic collection on hypertension.)

Depression, and poorer hearing and vision, are aspects of health and well-being that many seniors ignore, regarding them as no more than can be expected in old age. But their occurrence, however inevitable that may be, should not be regarded as untreatable, and seniors and their loved ones (and any with a duty of care) should be aware that if these problems go untreated, the consequences may well be more serious than they imagine.

Hypertension and diabetes, too, are medical problems that often go untreated. These problems often begin in middle age, and again, people are often unaware that their procrastination or denial may have serious implications further down the line. There is growing evidence that the roots of cognitive decline and dementia lie in your lifestyle over your lifetime, and in middle age especially.

Similarly, chronic stress may not only impair your mental performance at the time, but have long-term implications for your mental health in later old age. It is therefore an important problem to recognize and do something about for long-term health as well as present happiness. Scientific American has a self-assessment tool to help you recognize how much stress you are experiencing.

What does all this have to do with the sense-of-control association? Well, it seems to me that people who feel in control of their lives will be more likely to take action to deal with any of these problems, while those who don’t feel in control will tend not to take such action, thus giving up their control and making their beliefs about the perils of aging a self-fulfilling prophecy.

A final note: my talk of treatment should not be taken as advocating a medicalized view of aging. Another aspect of aging and cognition is the widespread use of drugs among older adults. In the U.S., it’s reported that over 40% of those over 65 take five or more medications, and each year about one-third of them experience a serious adverse effect. You can read more about this in this New York Times blog article.

Hypertension, diabetes, depression, and stress are all problems that are amenable to a range of treatments, of which I personally would put drugs last.

But my point here is not to advocate specific treatments! I am a cognitive psychologist, not a medical doctor. All I wish to do in this post is provide a warning and some resources.