The OK Plateau

In his excellent book on the art of memory, Moonwalking with Einstein, Joshua Foer describes the “OK Plateau” as something that anyone learning anything will encounter. This is the stage you reach once you’ve moved past “beginner” and are able to execute a task with some degree of automation. For example, when you first learn to type, you look for and consciously press the right keys. But at some point you learn where they are and can type without looking (or really thinking about individual keys). Foer points out something I’ve always wondered: if we tend to get better at something over time, why doesn’t everyone end up a 100+ wpm touch-typist?

The “OK Plateau” is reached when you are doing a task “well enough” for your needs, and your brain moves on to focus its conscious effort on something else. So even though you might be typing every day (email, reports, documents, forms), you probably will settle into some particular typing speed that never really improves.

[Image: excellent depiction of the OK Plateau by imagethink.]

This is fine for tasks in which “good enough” is, well, good enough. But there are some things in which you want to become an expert, or at least push your performance to a much higher level. To do that, it seems, you must push yourself back into a conscious awareness of what you are doing and examine and explore where you are making errors or performing suboptimally.

“[Those who excel] develop strategies for consciously keeping out of the autonomous stage while they practice by doing three things: focusing on their technique, staying goal-oriented, and getting constant immediate feedback on their performance.” (Foer)

This means constantly pushing yourself to do more, work faster, tackle harder examples, and so on, and then to learn from your failings or mistakes.

I have been thinking about this in terms of my pilot training. There are significant parts of flying that I can now do with some degree of automation, and it is tempting to declare them “learned” and move my tired brain on to the next big challenges. But it is also clear to me that complacency is not something you want to develop in flying – nor in driving – nor anything else that requires a good depth of experience and tuned reflexes. I’ve come across advice in various pilot venues urging you to continue polishing and refining. How precise can you make your short landing? How precise can you be on airspeed and altitude? If you picked out an emergency landing spot, fly low and actually check it out. Is it as obstacle-free as you thought from higher up?

I expect there is a transition you hit once you get your pilot’s license. You go from regular lessons with an instructor (with performance expectations and critiques) to absolute freedom to fly when you want, where you want, with no one watching over your shoulder. At that point, it is up to you to maintain that same level of scrutiny and to critique your own performance. My instructor told me to always have a specific goal when I go out to do solo practice. I’ve encountered the recommendation that, after landing, you give yourself a grade for every flight. What did you do well? What was borderline? What new questions came up that you should research?

Foer describes chess players who learn more from studying old masters’ games (and reasoning through each step) than from playing new games with other players. Studying past games can be a more mindful form of practice. Pilots can benefit similarly from reading through accident reports to learn how things go wrong. AOPA offers a rich array of Accident Case Studies with plenty of scenarios to think through and learn from.

For any hobby or skill, there are similar opportunities to make your practice time more effective at increasing your ability. Instead of playing through your latest violin piece, try doing it 10% faster and see what happens. Try transposing it to a different key on the fly. On your next commute, grade yourself on whether you maintained a specific following distance, how many cars in surrounding lanes you were consciously tracking, how well you optimized your gas mileage, or some other desirable metric.

Employing this approach to everything you do would be exhausting and impossible to maintain. But for those few things that really matter to you, for which the OK Plateau is not good enough, it could be what catapults you to the expert domain. If you’re interested, check out Foer’s short talk summarizing the OK Plateau and his advice for escaping it.

Too old to direct air traffic

I recently learned that there is a *maximum* age at which one can start training to be an air traffic controller. While a minimum age for various endeavors is common, specifying a maximum age seemed curious, especially given that the oldest you can be to start ATC training is 30 years old. So young!

Naturally, I wondered why this limit had been chosen. After some digging, I discovered that it derives from studies done in the 1960s and 1970s, such as the following.

Trites and Cobb (1962) conducted a study of ATC trainees and their subsequent job performance (in the first year of work) that showed a marked increase in training failure rates with age, up to age 45:

[Figure 7 from Trites and Cobb (1962): training failure rates by age at entry.]

They did not speculate about the reasons for the reduction in performance, concluding that

“Whatever the nature of the causal factors associated with chronological age and underlying the relationships of this study, there is no doubt that the number of potential training failures can be reduced and undesirable controllers eliminated by specifying a maximum age for entry into air traffic controller training. In the best interests of air safety and financial economy, establishment of an upper age limit is recommended.”

The FAA must not have heeded this advice, because nine years later, Cobb was still working to persuade them of the dangers of older ATC trainees. The Cobb et al. (1971) study examined 710 air traffic controllers, aged 21-52, and concluded that “age correlated negatively with 21 of the 22 aptitude measures and with training course grades.” This is a study of a biased sample, however: “because of their highly specialized pre-employment experience, these men were not required to qualify on the CSC ATC Aptitude Screening Test.” It is perhaps unsurprising that they might have lower aptitude measures, since these were not used to screen them as applicants. However, the negative correlation of performance with age is there. In Figure 2 from this paper, black means “failed basic training course”, hashed means “course grades comprising the approximate lower half of pass group”, and white means “course grades comprising the approximate upper half of pass group”:

[Figure 2 from Cobb et al. (1971): training outcomes by age group.]

The numbers in the right column are the number of subjects in each age group. “Although the subjects over age 34 represented only about 23 per cent of the 710 men involved in the entire study, their failure rate (31.1 per cent) in Academy ATC training was about three times that of the younger trainees.”

Cobb et al. went on to test these subjects on a variety of mental tasks, including simple arithmetic, spatial reasoning, following oral directions, abstract/logical reasoning, and a job-relevant task described as follows: “A highly-speeded test consisting of two parts of thirty items each. In each part, the subject is presented a flight data display for several aircraft and must determine whether certain changes in altitude may be directed without violating a specified time-separation rule.”
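
To make that last task concrete, here is a toy sketch (in Python) of the kind of judgment it asks for: given a display of flights, decide whether a proposed altitude change keeps every pair of aircraft separated in time. The data format, the 10-minute threshold, and all names here are my own inventions for illustration; the actual test was a timed display-based exercise, not a program.

```python
from dataclasses import dataclass

# Toy model of one line on a flight data display (invented for illustration).
@dataclass
class Flight:
    callsign: str
    altitude_ft: int
    minutes_to_fix: float  # estimated time to a shared fix, in minutes

def change_is_safe(flights, callsign, new_altitude_ft, min_separation_min=10.0):
    """Return True if moving `callsign` to `new_altitude_ft` keeps it separated
    (by at least `min_separation_min` minutes at the shared fix) from every other
    aircraft already at that altitude. A stand-in for the judgment the test item
    asks for, not an actual FAA separation rule."""
    subject = next(f for f in flights if f.callsign == callsign)
    for other in flights:
        if other.callsign == callsign:
            continue
        if other.altitude_ft == new_altitude_ft:
            if abs(other.minutes_to_fix - subject.minutes_to_fix) < min_separation_min:
                return False
    return True

display = [
    Flight("AAL12", 8000, 14.0),
    Flight("UAL34", 9000, 5.0),
    Flight("DAL56", 9000, 22.0),
]

print(change_is_safe(display, "AAL12", 9000))  # False: only 9 minutes from UAL34
print(change_is_safe(display, "AAL12", 7000))  # True: no other aircraft at 7000 ft
```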

Performance on every single test, except arithmetic, was negatively correlated with age.

Maybe this result, or others like it, did the trick. The right of the FAA to establish a maximum age for its air traffic controllers was passed into US law in 1972. The current version of the law states that

“The Secretary may, with the concurrence of such agent as the President may designate, determine and fix the maximum limit of age within which an original appointment to a position as an air traffic controller may be made.”

Which programming language should you learn first?

This question lies at the heart of all computer science curriculum design efforts, and it resurfaces year after year after year. One reason that it can never be answered conclusively is that the range of options, and the kinds of programming needs that are out there, change over time. Another reason is that it’s a holy war. For some folks, you might as well be asking what their favorite text editor is. For those folks, don’t.

But it’s a question of more general interest, beyond the computer science classroom. Douglas Rushkoff argues that everyone should be programming-literate, for their own survival, and even less extreme views highlight the benefits of computational thinking.

I’m not going to tell you what language to learn first, because I don’t have (and I don’t think there is) a fully general best-possible recommendation.

Lifehacker took a stab at characterizing a few common languages to help newcomers make this decision. Their programming language menu goes like this:

  • C: Trains You to Write Efficient Code
  • Java: One of the Most Practical Languages to Learn
  • Python: Fun and Easy to Learn
  • JavaScript: For Jumping Right in and Building Websites

… which isn’t quite how I would have done it. And I’m not sure these characterizations are even useful.

I’m much more persuaded by this approach, which points out that “learn a language” is not a single specific concept. It’s important to ask how *well* you want (or need) to learn the language.

I was immediately struck by the parallel with learning natural (human) languages. When I’m going to a foreign country for a week-long conference, I learn a smattering of useful/polite phrases to help me get around and not be That American while I’m there. If I were to move to that country, I’d be willing to invest orders of magnitude more effort to be functional in the language. I don’t agonize over which language to learn; I learn the one I’m going to need.

Likewise, the programming language you want to learn is the one you’ll need to have at your disposal. Work, school, implementation, or other constraints might dictate that to you. And if not — if you’re a hobbyist or just want to learn “programming” with no particular end goal — then does it matter? Pick a mainstream language (so that there are sufficient resources out there to aid your learning) and dive in!

Co-creating in the classroom

I’ve been reading a lot lately about participatory experiences in museums and other public institutions. One fascinating idea is that of “co-creation,” in which the organization partners with visitors/patrons to create content.

This is a radical departure from the traditional museum experience in which displays are hand-crafted by subject experts and debut in their final, polished form for passive consumption. After reading about museums in which patrons can propose exhibit ideas, then work alongside staff to make them happen, I wondered if the same ideas could be applied in the (college) classroom.

College students are generally cast in a powerless, passive role. They have paid the entrance fee (tuition), but the design, content, operation, evaluation, and educational goals of the class are entirely out of their hands. A couple of deviations from this pattern that I’ve observed are:

  • Choose Your Own Adventure (as a group): Students vote on a subset of advanced topics to be covered later in the course
  • Student as Presenter: Students each stand up in front of the class and educate their peers on a topic

The latter is, unfortunately, usually seen as an obligation imposed on the student rather than a chance to express a personal interest or satisfy a personal need. Making the topic a wide-open choice makes matters worse, not better. The atmosphere of judgment and evaluation is too strong.

How might we experiment with co-creation? How could students offer input on how to tailor the course for their maximal benefit, in combination with the instructor’s experience and knowledge?

Here are some (untested) ideas for co-creation that I’d like to put out there:

  1. Motivation and content: Instead of assigning tasks that your best guess says will be valuable, take time to find out what students want to get out of the course. Pre-class polls on this subject often fill up with “this class is required for my degree” or “it’s a prereq for something else,” so it may take some prodding to get them to dig deeper to find personal reasons for being there, or things they could get out of it. Examples could help inspire useful answers, especially from previous years’ students. Are there skills they want to gain? Facts they want to know? Methods they want to learn? And why?
  2. Operation: Start the course with a collaborative brainstorm (and whittling down) of what the course rules will be, on the mundane but necessary topics of attendance, turn taking, late assignments, and grading.
  3. Evaluation: Get student input on what they think the weights of the different topics and assignments should be.

These (and similar) ideas could give students agency, investment, and personalization in ways that just aren’t there in most classrooms today. These traits can foster increased learning and retention.

For co-creation to be successful, Nina Simon notes, we must truly value participants’ input. We can’t reduce students to blank slates or empty vessels ready to be filled with our wisdom. That sounds preachy (and it’s not a new idea either), but I think it never hurts to take a moment to show genuine respect for, and interest in, your students’ individual personalities. Do they have hobbies that relate to the course topic? Do they have prejudices about the subject due to your course’s reputation, a sibling’s experience, or simply the fact that it’s a required class that they would not have chosen on their own?

Stefan Stern warns against expecting the next big thing to spontaneously pop out of co-creative activities. “The real art is in synthesizing all the ideas afterwards and understanding the big, unlooked-for themes that underpin them.” Sounds like good fodder for organizing a syllabus to me!

While relinquishing control can be a little scary and even more chaotic, I think it can also make the teaching process more fun, inspiring, and educational for the educator. Each offering of the class would be different. We assiduously poll students at the end of the term for the highly prized course evaluations. Why don’t we also assess the course’s value by polling the teachers to find out what they learned, or how they benefited?

Cataloging on the edge

The first major assignment for my Cataloging class was to round up 20 books and create catalog entries for them. Any books, so long as no more than three were “literature.” After getting stuck for a while on trying to decide what exactly “literature” was, I settled on my books (mostly non-fiction, which apparently was the goal), and dove in.

This was hard.

This was hard because there are no good resources out there (that I know of, or that my class knows of) for exactly how to “catalog a book.” This astonished me, since a system that allows many, many people to contribute data is exquisitely vulnerable to any inconsistencies in how those records are created. Surely there are standard rules for what information to include, where to find it, and how to express it?

Kind of.

The current cataloging ruleset, RDA (Resource Description and Access), sets forth guidelines about what kind of content should go into a bibliographic record, but not how to format it. RDA seeks to implement FRBR (Functional Requirements for Bibliographic Records), which is a statement of cataloging philosophy and what user needs are out there. FRBR also contains an entity-relationship diagram that traces out how works, creators, and subjects are (or should be?) connected. FRBR is silent on how to actually create a record, though.
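
To give a rough sense of the shape of that entity-relationship model, here is a loose sketch in Python. It covers only the familiar Group 1 chain (Work → Expression → Manifestation → Item) plus creators and subjects; the field names and the example values are simplified illustrations of my own, not anything defined by FRBR itself.

```python
from dataclasses import dataclass, field
from typing import List

# A loose, simplified sketch of FRBR's Group 1 entities plus creators and
# subjects. Names, fields, and example values are illustrative only.

@dataclass
class Item:            # a single physical copy on a shelf
    barcode: str

@dataclass
class Manifestation:   # a particular published edition
    publisher: str
    year: str
    items: List[Item] = field(default_factory=list)        # "is exemplified by"

@dataclass
class Expression:      # a particular realization, e.g. one translation
    language: str
    manifestations: List[Manifestation] = field(default_factory=list)  # "is embodied in"

@dataclass
class Work:            # the abstract intellectual creation
    title: str
    creators: List[str] = field(default_factory=list)       # Group 2 entities
    subjects: List[str] = field(default_factory=list)       # Group 3 entities
    expressions: List[Expression] = field(default_factory=list)        # "is realized through"

# One made-up chain: a work, one English expression, one edition, one copy.
work = Work(
    title="An Invented Handbook",
    creators=["Doe, Jane"],
    subjects=["Cataloging"],
    expressions=[Expression(
        language="eng",
        manifestations=[Manifestation(
            publisher="Example Press", year="2013",
            items=[Item(barcode="0001234567")],
        )],
    )],
)
print(work.title, "->", work.expressions[0].manifestations[0].year)
```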

Further, no real system out there actually implements FRBR yet, and even RDA only spells out a partial path to it (parts of RDA are not yet defined, like what kind of relationships between subjects should be captured).

In the meantime, real systems use something called MARC (MAchine-Readable Cataloging) to encode bibliographic records. So that’s what we used to catalog our 20 books. MARC provides some guidelines about formatting (e.g., when to end a field with a period and what field separators to use) but is silent on other aspects like capitalization and bigger questions like where to get the required information from. For example, how do you go about extracting the publication date from a book? How should you express the author’s name?
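
For a sense of what those records look like, here is a small Python sketch that prints a few MARC-style fields for a made-up book. The tags are real MARC 21 tags (100 for the author, 245 for the title statement, 264 for publication), but the indicator values, subfield contents, and the little formatting helper are my own illustration, not a validated record or the conventions we were graded on.

```python
# A toy rendering of a few MARC 21 fields for an invented book.
# Tags 100/245/264 are real; the specific values here are illustrative.

def marc_field(tag, indicators, subfields):
    """Format one field in a human-readable style: 'TAG IND1IND2 $a ... $b ...'.
    This is a display convenience, not an actual MARC binary or XML encoding."""
    subs = " ".join(f"${code} {value}" for code, value in subfields)
    return f"{tag} {indicators} {subs}"

record = [
    marc_field("100", "1 ", [("a", "Doe, Jane.")]),                    # main entry: personal name
    marc_field("245", "10", [("a", "An invented handbook :"),          # title proper
                             ("b", "cataloging practice examples /"),  # remainder of title
                             ("c", "Jane Doe.")]),                     # statement of responsibility
    marc_field("264", " 1", [("a", "Lincoln, NE :"),                   # place of publication
                             ("b", "Example Press,"),                  # publisher
                             ("c", "2013.")]),                         # publication date
]

for line in record:
    print(line)
```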

Here’s where the assignment gets pedagogically interesting, for two reasons.

First, we were operating at the “pleasantly frustrating” level. James Paul Gee listed this as an effective learning principle in his guide to “Good Video Games and Good Learning.” He suggested that a good learning challenge stays within, but at the edge of, the student’s “regime of competence.” We weren’t just executing a set of well-understood rules; instead, there was a lot of ambiguity and nuance, and each question pushed us to dig deeper.

Second, we were working with books in the wild. I gather that most cataloging professors assign their students the same set of books to practice cataloging on. The real answer is known, any questions or gotchas have already been anticipated, and the result is a controlled, sandbox experience.

My professor instead flung the doors wide open and let us each pick our own 20 books, without any sense of what would turn out to be easy or hard to catalog. The result was a chaotic, challenging, and ultimately far more educational experience.

This approach only worked because we had a discussion forum and a professor who monitored it assiduously. Students plastered the forum with questions. “What if the book is a translation?” “What if the pages aren’t numbered?” “What if there are multiple publishers?” Our professor responded quickly to every question, and over time I realized that I was quite possibly learning more from the forum than from my own small set of 20 books. With 88 students, we had something like 1700 books being cataloged (some were duplicates), and the array of issues that came up was dazzling. It was great to have the practice of actually creating my own records (and hunting down resources to allow me to deal with my books’ issues), but it was also fantastic to get to eavesdrop on my classmates’ questions and learn vicariously through them.

In that assignment, we only had to create fields for each book’s title, publisher, publication date, etc. The next assignment had us add the authorized form of the author’s name, and we are just about to revisit our records to add appropriate subject headings. Each iteration makes our records richer and increases our understanding of the cataloging process. And I have to applaud Prof. Mary Bolin for structuring the process in such an interesting and valuable way.
