Readability - a Catch-22
As we’ve been posting about requirements to write at x or y grade reading level, I’ve been thinking about how these regulations unintentionally sidetrack us from two important parts of reading comprehension:
1) what really makes a text or message hard to understand: often the concepts, not the word or sentence length; and
2) what kind of writing will actually inform people and advance their health literacy.
In the field of health literacy, word and sentence length have been our measures, applied through readability formulas. But readability formulas were never meant to drive regulation and policy. Way back in 1974, Flesch, a developer of the Flesch-Kincaid, stated that he hoped users “won’t take the formula too seriously and won’t expect from it more than a rough estimate.”
What Flesch and others recognized is that short words are not always easy to understand and long sentences aren’t always hard to understand. The word “waive,” as in “We will waive your premium,” counts exactly the same on a Flesch test as “we,” “will,” and “your” (Redish & Selzer, 1985, p. 4). Readability researchers back in the 70s and 80s understood that reading is much more complex than processing rows of words and sentences. It’s why most textbook publishers don’t use readability scores anymore.
When we’re required to write to meet grade-level/readability criteria, we’re caught in a Catch-22. We often wind up gaming the system: artificially dividing sentences and using sentence fragments (Ancker, 2004; Redish & Selzer, 1985, p. 4). If we add the very words or sentences that would make the text truly more comprehensible (understandable), we wind up unhappily raising the readability score of the material. The regulators don’t like that, so we write what scores better: often short sentences without much cohesion, seldom introducing the vocabulary and concepts people need to truly understand, learn, and use health information.
Jessica Ancker (2004) demonstrates this with a clever example: “Be prepared to die next month” scores lower (easier to read) than “Call for an appointment next month” because the words in the former are shorter.
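You can check Ancker’s point with a few lines of code. Below is a rough sketch of the Flesch-Kincaid grade-level formula (0.39 × words per sentence + 11.8 × syllables per word − 15.59) with a naive vowel-group syllable counter of my own; real readability tools count syllables more carefully, so treat the exact numbers as approximations.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups, ignoring common silent endings.
    This is a crude heuristic, not a dictionary-based count."""
    w = re.sub(r"[^a-z]", "", word.lower())
    if w.endswith("ed") and not w.endswith(("ted", "ded")):
        w = w[:-2]          # "prepared" -> "prepar" (2 vowel groups)
    elif w.endswith("e") and not w.endswith("le"):
        w = w[:-1]          # "die" -> "di" (1 vowel group)
    return max(1, len(re.findall(r"[aeiouy]+", w)))

def fk_grade(sentence: str) -> float:
    """Flesch-Kincaid grade level for a single sentence."""
    words = sentence.split()
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) + 11.8 * (syllables / len(words)) - 15.59

easy = fk_grade("Be prepared to die next month")
hard = fk_grade("Call for an appointment next month")
print(round(easy, 1), round(hard, 1))  # prints: 0.5 2.5
```

Both sentences have six words, so only syllable count separates them: the grim sentence (7 syllables) scores roughly two grade levels “easier” than the appointment reminder (8 syllables, thanks to “appointment”). The formula sees word length, not meaning.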
It has been amply demonstrated that “simplified” text often looks simpler but, when tested, is no more understandable, and sometimes is harder to understand (Charrow & Charrow, 1979; Davison et al., 1980; Duffy & Kabance, 1982; McNamara, Kintsch, Butler-Songer, & Kintsch, 1996; Kintsch, 1994). And the Institute of Medicine (IOM) report, “Health Literacy: A Prescription to End Confusion,” calls for researchers and practitioners to move beyond reading level to find newer solutions to low health literacy (Nielsen-Bohlman et al., 2004).
I call this situation The Simplicity Complex.
Anybody see any solutions?
(some references mentioned above)
Ancker, J. (2004). Developing the informed consent form: A review of the readability literature and an experiment. American Medical Writers Association Journal, 19, 97-100.
Charrow, R. P., & Charrow, V. R. (1979). Making legal language understandable: A psycholinguistic study of jury instructions. Columbia Law Review, 79, 1306-1374.
Duffy, T. M., & Kabance, P. (1982). Testing a readable writing approach to text revision. Journal of Educational Psychology, 74, 533-548.
Kintsch, W. (1994). Text comprehension, memory, and learning. American Psychologist, 49, 294-303.
McNamara, D. S., Kintsch, E., Butler-Songer, N., & Kintsch, W. (1996). Are good texts always better? Interactions of text coherence, background knowledge, and levels of understanding in learning from text. Cognition and Instruction, 14, 1-43.
Redish, J. C., & Selzer, J. (1985). The place of readability formulas in technical communication. Technical Communication, 32(4), 46-52.