What Can Online Course Designers Learn from Research on Machine-Delivered Instruction?

How to create effective and efficient mechanical aids to instruction.
By Julie Vargas

With the growing prevalence of massive open online courses (MOOCs), anyone with an Internet connection can gain access to college-level instruction. Organizations such as GCFLearnFree, Coursera, and Khan Academy offer courses without charge. Colleges and universities, individually and collectively, have scrambled to offer their own courses. Some courses are available without charge, though for a fee the sponsoring institution may offer certificates of satisfactory completion.

How effective are online courses? A 2010 meta-analysis available through the US Department of Education concluded that students taking a college’s online version of a course generally performed as well as those taking the same course in a lecture format. Like their on-campus counterparts, these online courses included individualized help. The online versions typically featured chat sessions, e-mail interaction, Skype, or even section meetings. The course contents were similar. It is not surprising that students performed equally well in the two formats.

With enrollments that can exceed one million students, MOOCs, by contrast, must rely on automation. The current tools available to instructional designers make it possible to automate instruction so that it adapts to each student’s progress. Games and exercises to improve fluency typically adjust to user speed and accuracy, drawing randomly from item pools at different levels of difficulty. Such programs enable students to answer more rapidly but do not create new skills. MOOCs that offer college-level instruction, however, must teach new competencies.
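To make that distinction concrete, a fluency drill of the kind just described can be sketched in a few lines of Python. The item pools, speed threshold, and level-change rules below are assumptions chosen only for illustration, not drawn from any particular product; note that the drill builds quickness on skills the student already has but teaches nothing new:

```python
import random
import time

# Hypothetical item pools keyed by difficulty level; a real
# program would draw from far larger banks.
ITEM_POOLS = {
    1: [("2 + 3", "5"), ("4 + 1", "5"), ("3 + 3", "6")],
    2: [("12 + 7", "19"), ("15 + 8", "23")],
    3: [("34 + 29", "63"), ("47 + 38", "85")],
}

def fluency_drill(trials=10, target_seconds=4.0):
    """Adjust difficulty from the speed and accuracy of each answer."""
    level = 1
    for _ in range(trials):
        prompt, answer = random.choice(ITEM_POOLS[level])
        start = time.monotonic()
        reply = input(prompt + " = ").strip()
        elapsed = time.monotonic() - start
        if reply == answer and elapsed <= target_seconds:
            level = min(level + 1, max(ITEM_POOLS))  # fast and right: harder items
        elif reply != answer:
            level = max(level - 1, min(ITEM_POOLS))  # error: easier items

if __name__ == "__main__":
    fluency_drill()
```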

Behavioral Principle

The research that the American behaviorist B. F. Skinner carried out in the mid-twentieth century sheds light on how students learn and what can be accomplished through automated instruction. Approximating one-on-one tutoring, his procedure begins with the determination of what behavior constitutes competent performance. A human tutor then assesses the student’s current level of performance. Building on the student’s existing skills, the teacher guides the student through increasingly complex tasks that Skinner called “successive approximations.” If the student progresses rapidly, the teacher adjusts by assigning more difficult problems or questions. If the student hesitates or makes mistakes, the teacher finds out what knowledge or skills are lacking, goes back a step, breaks the material into smaller steps, or provides hints or other help. This kind of individualized instruction continually adjusts to student progress.

The steps a teacher provides when working with one student exemplify what Skinner called “shaping,” the process of strengthening the actions that most closely approximate a final desired performance. Shaping does not always require a teacher. A child learns to walk or to throw a ball largely through “natural” consequences that follow the child’s actions. Academic speech, writing, and thinking, by contrast, require help from other people. Skinner and his colleagues found that to understand or to alter behavior that is not reflexive, you have to concentrate on what happens immediately after each action. In research that started with rats and pigeons, he and his colleagues detailed the precise relations between actions and their immediate postcedents. Like geneticists who work with fruit flies and mice, Skinner and his colleagues were discovering general principles applicable to all species.

Origins of Programmed Instruction

Skinner’s involvement in education began when he visited his younger daughter’s fourth-grade class on Father’s Day. Sitting on a little chair, he observed a typical math lesson. The teacher explained the procedure for the day’s problems and gave out worksheets. Skinner watched as some students worked rapidly and others fidgeted or raised their hands for help. Suddenly he realized that the principles of shaping were not being followed. It was not the teacher’s fault: no teacher could shape each student’s behavior in a class of fifteen or twenty. The teacher needed help.

Always a problem solver, Skinner went home and patched together a cardboard gadget he called a teaching machine. Changing his design a dozen times over the next few months, he realized that what was important was not the device but the sequencing of actions and immediate feedback. He called the automated method of teaching “programmed instruction.”

Programmed instruction follows shaping principles, strengthening learner actions (or successive approximations) that build toward a final behavior. Shaping requires constant analysis and problem solving by students—actions that go far beyond the push of a “next” button or the click of a link to another section. Students must demonstrate understanding of the content of each “screen” of programmed instruction. Most programmed instruction in the 1960s asked students to fill in words or phrases. All responses were evaluated immediately. Skinner assumed that success would strengthen newly acquired behavior.

To ensure success, he analyzed data from each screen of programmed instruction and revised the content until errors were almost nonexistent. Still, he made provision for the rare mistake. Items that a particular student missed were presented again until the student answered correctly. As in successful one-on-one instruction, the student had to master each major concept in programmed instruction before more complex material was introduced.
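The logic is easy to sketch. In the minimal Python version below, a queue stands in for Skinner’s hardware, the two frames are hypothetical, and answer matching is naively exact; missed frames return until the student answers them correctly, and the error counts feed the kind of revision described above:

```python
from collections import deque

# Hypothetical two-frame unit; real programs ran to hundreds of frames.
UNIT = [
    ("Skinner called each graded step a successive ____.", "approximation"),
    ("Strengthening approximations to a final performance is ____.", "shaping"),
]

def programmed_unit(frames):
    """Present every frame, evaluate each answer at once, and
    re-present missed frames until the student gets them right."""
    queue = deque(frames)
    errors = {}  # per-frame error counts, kept for later revision
    while queue:
        prompt, answer = queue.popleft()
        reply = input(prompt + "\n> ").strip().lower()
        if reply == answer:
            print("Correct.")  # immediate confirmation
        else:
            print("The answer was: " + answer)
            errors[prompt] = errors.get(prompt, 0) + 1
            queue.append((prompt, answer))  # mastery: try this frame again
    return errors

if __name__ == "__main__":
    programmed_unit(UNIT)
```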

Skinner and fellow behaviorist James G. Holland refined these principles when they converted Skinner’s undergraduate course to programmed instruction. Because computers were not yet available, students worked on teaching machines, writing critical terms on strips of paper that moved under Plexiglas when correct answers were revealed. The strips of paper provided data on how students behaved during instruction as well as at the end of each unit. Problems in understanding a particular step were analyzed and revisions were made until the next semester’s students succeeded. Continual improvement of instructional sequences based on analysis of data was thus added to high levels of response, immediate feedback, and mastery criteria as a requirement of successful programmed instruction.

Features of Effective Online Instruction

Today’s MOOCs and “tutorials” do not follow even the first tenet of successful programmed instruction. MOOCs assume that students learn from watching lectures, animated diagrams, or video presentations. While computer graphics today far surpass those available to yesterday’s lecturers, the format that behaviorologist E. A. Vargas calls the “PAT system” (present, assign, and test) differs little from that found decades or even centuries ago. To improve the effectiveness and efficiency of online courses, designers might profit by revisiting the research on active responding done with programmed instruction.

A range of studies have examined the degree to which achievement depends on responding to ideas and principles during instruction. Around the turn of the twenty-first century, Darrel Bostow and his colleagues at the University of South Florida varied student response density in a series of experiments. One programmed-instruction course taught computer programming. The version in which every screen required students to respond proved more effective than a version requiring responses on only every other screen and a version with all the words already filled in (no overt responding). The more actively students responded during instruction, the better code they wrote for the test and for a new programming task. Students must respond actively to learn effectively.

The particular material to which students respond is important too. To many educators in the last century, writing programmed instruction looked easy. It seemed that all you had to do was to take some text, split it into small parts, and leave out words here and there for students to fill in. Unfortunately, the process is not that simple. The particular words left out and how many are omitted make a difference in how much students learn. Instructional designers must determine the precise features of content responsible for the way students respond. In the heyday of programmed instruction, many so-called programmed lessons did not consider the relationship between student behavior and the features of each screen. Poorly designed programs enabled students to get correct answers for reasons that had little to do with what they were supposed to learn.

To evaluate the degree to which students had to attend to relevant concepts, Holland developed a measure he called the blackout ratio. The premise was simple: if students could respond accurately with material blacked out, the hidden part was not needed for correct response. The higher the blackout ratio, the worse the programming. A statistics program with a high blackout ratio, for example, presented a screen with a nice diagram and more than fifty words of explanation. With all this text blacked out, however, a student could still correctly fill in the one blank: “Thus, there are 3 x 2 x 1 = ____ ways in which 3 balls can fill 3 cells.” No student had to pay attention to the rest of the material to fill in the answer “6.” This statistics program generated an overall blackout ratio that was greater than 70 percent. It was not truly programmed at all.
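The measure itself is simple arithmetic: the share of a frame’s text a student can ignore and still respond correctly. The word counts in the sketch below are assumed for illustration, not taken from Holland’s data:

```python
def blackout_ratio(total_words, words_needed):
    """Fraction of a frame's text that can be blacked out without
    affecting the response; higher ratios mean poorer programming."""
    return (total_words - words_needed) / total_words

# Assumed counts for the statistics frame described above: roughly
# sixty words on the screen, with only the final seventeen-word
# sentence needed to fill in "6".
print(blackout_ratio(total_words=60, words_needed=17))  # about 0.72
```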

Perhaps students read material even if they don’t need it to respond. Holland and his colleague Judith Doran tracked student eye movements to learn about reading patterns. They compared the eye movements of eighteen college students working on low- versus high-blackout versions of the Holland and Skinner program written for Harvard University undergraduates. To control for individual differences, each student took two low- and two high-blackout units of the same program. Results showed that eye movements depended on which version was being completed. The low-blackout version produced eye movements showing typical reading. In the high-blackout version, however, students did not read much of the unneeded material. Their eyes flickered back and forth around the blank before filling it in. As Doran and Holland put it, “the student learns to read only that part of the text on which the response depends.” Eye-tracking research by the usability expert Jakob Nielsen shows a similar nonlinear pattern for web pages. To ensure that students read information, teachers must give them immediate reasons for doing so.

Lessons for MOOC Designers

To improve instruction, course designers need information about how students are responding to their lessons. The lecture model requires little or no responding, and so it generates little feedback. Even with data from practice problems or tests, students’ behavior during lectures is invisible to designers. Mastery on quizzes does not reveal whether students could have answered correctly before the “lesson.” If students do watch, do they think about the concepts and principles they are supposed to learn? Or will they remember only irrelevant details like a funny joke, a lecturer’s outfit, or spectacular graphics?
 
A continuing education course for accountants illustrates one way to ensure that students pay attention. The video for the course flashes random passwords briefly on the screen at unpredictable times. A quiz at the end asks for all the passwords. Missing any requires taking the whole unit over again—with new passwords. The intent is to ensure that students watch the entire video, but why require watching? The material presented must be important, or continuing education credit could be granted instead for passing a test. Students could more usefully have been engaged by being required to actively respond to critical information instead of noting “passwords” along the way.
 
What students are asked to do is what they learn. Quizzes and tests may or may not reflect the ultimate purpose of a course, as a physics professor at Harvard found out. Eric Mazur taught a traditional lecture course. He read an article arguing that while physics students could solve problems like those diagrammed in the text he used, they had no idea what their answers meant in daily life. His first reaction was, “Not my students!” But he gave a test to find out. To his surprise, many of his students could not select the correct answer even to simple questions, such as one asking which path an object would take in falling to earth from a moving airplane. As a result, Mazur completely changed his teaching format. He split his lectures into short sections, each followed by a practical problem based on the principles just presented. Students answered, then discussed their answers with those sitting next to them. Such “peer instruction” increased both the amount of student activity during class and the relevance of the course to real-world problems.
 
Shaping principles do not help an online instructional designer establish objectives. Once objectives are set, however, shaping provides the means for students to reach them. Course designers understand the importance of adjusting teaching to student performance. Courses are given names like Adaptive Learning, Intelligent Tutoring Systems, Differentiated Instruction, or even just Interactive Tutorial. Despite their names, though, few of these courses shape behavior.

Achievable First Steps

Writing programmed instruction is not easy. It requires a clear specification of what behavior should result, an analysis of skills students are likely to have that can be built on, and an analysis of sequential steps needed for building competence. However instruction is delivered, the more actively students respond to relevant features of material being presented, the more they will achieve. Data from each step of student performance reveal details currently invisible to instructional designers and to students themselves. These data enable programs to adjust to the moment-to-moment levels of mastery of critical components of a subject area.
 
Instructional designers cannot be expected to abandon current presentation styles for programmed instruction. They could, however, take a first step, just as any shaping begins with an initial successive approximation. Current lectures could be continued, but with questions interjected at intervals throughout videos, in a format similar to that of Mazur’s physics course. Students in some courses already discuss content through social media with peers who are working on the same problem at the same time elsewhere. Data on student responses during presentations not only aid students with trouble spots but also, when summarized, reveal areas where presentations could be improved.
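Summarizing such data requires nothing elaborate. The sketch below, with a hypothetical response log and an arbitrary 30 percent error threshold, flags interjected questions whose error rates point to segments of a presentation needing revision:

```python
from collections import Counter

def trouble_spots(responses, threshold=0.3):
    """From (question_id, correct) pairs logged during a video, flag
    questions whose error rate suggests the preceding segment needs work."""
    attempts, errors = Counter(), Counter()
    for qid, correct in responses:
        attempts[qid] += 1
        if not correct:
            errors[qid] += 1
    return {qid: errors[qid] / attempts[qid]
            for qid in attempts
            if errors[qid] / attempts[qid] >= threshold}

# Hypothetical log: question 2 emerges as a trouble spot.
log = [(1, True), (1, True), (2, False), (2, False), (2, True)]
print(trouble_spots(log))  # {2: 0.666...}
```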

Most online college courses include practice exercises. Yet, like the assignments that Skinner observed in his daughter’s math class, online practice exercises rarely adjust to student performance. Why not sequence the practice problems from easy to difficult, with branching forward or backward according to student accuracy and speed? The Internet has unlimited potential for adjusting to student performance.
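A minimal version of such branching might look like the following sketch, which assumes problems already ordered from easy to difficult, with a speed threshold and step sizes chosen only for illustration:

```python
import time

def ask(problem):
    """Present one problem and score the reply immediately."""
    prompt, answer = problem
    return input(prompt + " = ").strip() == answer

def branched_practice(problems, fast_seconds=6.0):
    """Move through problems ordered easy to difficult, branching
    backward after an error and skipping ahead after a fast,
    correct answer; a missed step repeats until mastered."""
    i = 0
    while i < len(problems):
        start = time.monotonic()
        correct = ask(problems[i])
        elapsed = time.monotonic() - start
        if not correct:
            i = max(i - 1, 0)  # error: back up to an easier problem
        elif elapsed <= fast_seconds:
            i += 2             # fast and right: skip ahead
        else:
            i += 1             # right but slow: next problem in order
```

Even a rule set this crude gives a course something a fixed worksheet cannot: a moment-to-moment record of each student’s accuracy and speed.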
 
The effectiveness and efficiency of instruction increase when students are expected to interact with critical features of a subject. With more student activity, the resulting data reveal strengths and weaknesses of a teaching procedure. And that is the most valuable feedback of all for designers of online courses. 
  
Julie Vargas is president of the B. F. Skinner Foundation and daughter of B. F. Skinner. She is the author, most recently, of Behavior Analysis for Effective Teaching. Her e-mail address is [email protected].