Improving skill courses. So far, so good.

I’ve been improving skill courses since 2007. In a skill course, students learn to do tasks independently. Examples include programming and statistics. The focus is on deep learning, where students learn how to solve problems.

Why skill courses? First, learning skills helps students change their lives. Learning skills is a key factor in upward mobility (Bessen, 2014). This applies regardless of race or ethnicity (Barrow and Rouse, 2005). Skills help students achieve financial security, an important goal for undergraduates (Mathieson and Bhargava, 2003). Second, skill courses affect retention. Poor introductory skill courses can discourage students who might otherwise succeed. Third, advanced economies rely on skilled workers (ILO, 2010). Demand for problem solvers is growing (Levy and Murnane, 2004).

Skill courses

A good skill course requires joint responsibility:

  • Authors should write good textbooks, exercises, and other resources.
  • Instructors should help students use their time and resources well.
  • Students should study effectively. They should do exercises, monitor their understanding, self-test, etc.

Students can learn how to learn through books (e.g., Carey, 2013; Brown et al., 2014), videos (e.g., Chew, 2016), and other resources. There are tools to help with self-testing (e.g., Quizlet), randomized practice (e.g., Algebra Touch), and other effective learning techniques.

Authors and instructors have good resources, too, like Ambrose et al. (2010), Bransford et al. (2000), and Wiggins and McTighe (2005). To a large extent, we know what good skill courses look like (Pellegrino and Hilton, 2012). However, there are few complete tools to help authors create good skill courses, with content, exercises, and feedback mechanisms that are practical for instructors and students. There’s software for administering courses (e.g., Moodle), writing interactive content (e.g., iBooks Author), gathering formative feedback (e.g., ASSISTments), and other individual tasks. However, putting the pieces together demands more effort than many faculty will exert.

Over the last ten years, I’ve developed software that helps authors write courses for learning skills like programming. It implements learning science recommendations, like modeling problem solving, big ideas, patterns, and frequent low-stakes formative feedback. It has practical workflows that let a human grader assess student problem solving. A grader can assess 2,000 submissions in one semester, taking about 30 minutes per day.

I’ve written two versions of the software so far. They have been used to create four different skill courses that have been taught multiple times to real students in credit-bearing classes. Each course is a complete package, a drop-in replacement for a traditional textbook.

Let’s review some of what is known about skill courses.

What makes a good skill course?

A good course has the right content, exercises, and social interaction. Course attributes include:

  1. Explanations that include task contexts (Bransford et al., 2000), encourage mental model development (Soloway, 1986), promote task transfer (Pellegrino and Hilton, 2012), explain problem-solving methods (Yan and Lavigne, 2014), and match students’ changing knowledge state (van Merriënboer, Kirschner, and Kester, 2003).
  2. Hands-on work (Kontra et al., 2015).
  3. Tasks that require problem solving (Pellegrino and Hilton, 2012).
  4. Frequent formative feedback (Black and Wiliam, 1998).
  5. Good social environments. Students get one-on-one help when they need it. Instructors are helpful, respectful, and personable (Wang and Eccles, 2013). Students respect each other (Johnson and Johnson, 2002).
  6. Student motivation (Yeager and Walton, 2011).

The list doesn’t mention particular technology, or whether classes are online, blended, or traditional. It’s best to identify experiences that support effective skill learning, and then choose technology and class organization, not the other way around.

Flipped courses can have these attributes. Students prepare outside class, reading content, watching videos, doing exercises, and whatever else is required. Class time is for active learning. Students get help with exercises, work on labs, work in teams, and so on. Instructors are facilitators, not content providers. Flipped courses make good use of relatively expensive face-to-face time.

Flipped courses can have locked or flexible schedules. Locked schedules have every student focus on the same content at the same time, as in a traditional course. Flexible schedules let students learn at their own pace. Ideally, students can ask for help with any part of the course at any time.

A good skill course is a complete sociotechnical system, requiring all six attributes to at least some extent. If the course is not hands-on, or exercises only require rote memorized responses, students will not learn independent problem solving. If instructors are intimidating or disrespectful, students will not ask for help, undermining a critical aspect of flipped classes. If students don’t respect each other, social norms might prevent students from asking for help, again undermining the course.

Poor social interaction can be particularly problematic for women, students of color, and other underserved groups. Flipped courses promote one-on-one interaction between students and instructors. Instructors who are personable, respectful, and open can help diverse students succeed.

My long-term goal is to help people build, run, and participate in effective skill courses:

  • Help authors write good content and exercises.
  • Help graders give personal feedback efficiently.
  • Help instructors run courses that are rewarding for students and themselves.
  • Help students become skilled problem solvers.

My work so far

I began by studying literature in educational psychology, developmental psychology, cognitive psychology, social psychology, educational design, and other fields. I examined the general literature (e.g., Bransford, Brown, and Cocking, 2000; Ambrose et al., 2010), and literature linked to particular fields, such as physics (e.g., Crouch and Mazur, 2001) and computer science (e.g., Sorva, 2013). Next, I developed an approach to building skill courses with the attributes given above. The following summarizes that work.

CoreDogs

I wrote Web-based software to implement my ideas. The first system was CoreDogs. CoreDogs focused on (1) content authoring, (2) exercises, and (3) content use. I used CoreDogs to create two credit-bearing courses, one on client-side Web technology (HTML, CSS, and JavaScript), and the other on server-side technology (PHP and MySQL). Both were taught several times.

Content authoring. CoreDogs helped authors write lessons. High-quality lessons help students build conceptual models that exercises expand and make concrete. Written lessons focus on a few core ideas (the Core in CoreDogs), with students spending most of their time on exercises.

CoreDogs helped with mundane work, as does any good writing software. It created tables of contents automatically, supported keywords, let authors change the order of lessons with a drag-and-drop interface, and so on.

What set CoreDogs apart were features for writing lessons for skill courses. For example, CoreDogs helped authors use pseudostudents, virtual “students” who take the course along with real students. They helped with criteria 1, 5, and 6 above: good explanations, good social environments, and enthusiastic students.

Figure 1 shows a student’s emotional reaction to the amount of content in a course. Renata and CC are pseudostudents. Kieran is a pseudoinstructor.

Figure 1. Pseudostudents and a pseudoinstructor

The conversation implies that negative emotional reactions are normal, and can be discussed openly. (Yes, the pseudostudents were dogs. It seemed like a good idea at the time.)

Pseudostudents had other uses. They asked about concepts, sometimes making mistakes, sometimes having insights. They worked on exercises, with annotations from the author pointing out what they did right and wrong. Pseudostudents asked why they were learning the content, or why the course was designed as it was. The answers affected their enthusiasm for the course.

Pseudostudents modeled social interaction. They were not afraid to admit ignorance, or challenge what they were told. However, they were always respectful of each other and the instructor.

In later courses, pseudostudents were people, not dogs. I began with cartoons, then switched to photographs. Pseudostudents modeled interaction with women and minorities. They were female, male, Asian, black, white, older, younger, outgoing, and introverted. The pseudoinstructor in one course was a young black woman. Another course had an older black man as the pseudoinstructor. Social interaction could be playful at times, but was always respectful.

Pseudostudents were shown as captioned images, with text and special formatting (see Figure 1). Could they have been made with any good editing software? Of course. However, CoreDogs made it easy to add them, change them, change the text, and even change the format of all pseudostudents at once. Authors did not have to work with fonts, colors, images, etc. CoreDogs handled the details.
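
To make the idea concrete, here is a minimal sketch, in Python, of how a tool like CoreDogs could handle pseudostudent formatting. It is an illustration only, not CoreDogs’ actual code (which lived in Drupal): the marker syntax, character registry, and template are all my assumptions. The point is that the author writes a short reference, and one shared template controls the image, layout, and styling of every pseudostudent remark.

    # Illustration only: not CoreDogs' actual code. The marker syntax and
    # names are assumptions. Authors write [[say:renata|...]] in a lesson;
    # one shared template renders every remark, so changing the template
    # restyles all pseudostudents at once.
    import re

    CHARACTERS = {
        "renata": {"display": "Renata", "photo": "renata.png"},
        "cc":     {"display": "CC",     "photo": "cc.png"},
        "kieran": {"display": "Kieran", "photo": "kieran.png"},  # pseudoinstructor
    }

    TEMPLATE = (
        '<figure class="pseudostudent">'
        '<img src="/characters/{photo}" alt="{display}">'
        '<figcaption><strong>{display}:</strong> {text}</figcaption>'
        '</figure>'
    )

    def render_pseudostudents(lesson_html: str) -> str:
        """Replace [[say:who|text]] markers with formatted remarks."""
        def expand(match):
            who, text = match.group(1), match.group(2)
            return TEMPLATE.format(text=text, **CHARACTERS[who])
        return re.sub(r"\[\[say:(\w+)\|(.+?)\]\]", expand, lesson_html)

    print(render_pseudostudents("[[say:renata|That is a lot of content!]]"))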

Exercises. CoreDogs gave exercises special status, given their importance for skill learning. Authors created exercises as separate objects, instead of just part of the content. They could embed exercises in the content with a simple reference, the equivalent of “insert exercise 89 here.”

Treating exercises this way had two advantages. First, they could evolve separately from the content, and be substituted for each other as needed. Second, students and instructors could access exercises in two ways: as part of lessons, and in separate lists. Students could see a list of exercises, showing which they had yet to complete. If an exercise reminded them of one they had done earlier, they could quickly access the prior exercise. (Thinking about such behavior led to a pattern system, described later.)
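
Here is a rough Python sketch of the embed-by-reference idea. The class names and marker syntax are mine, not CoreDogs’ internals (and the sample exercise text is made up), but the structure is the same: exercises live in their own store, lessons point at them by id, and the same store can drive a separate “exercises remaining” list.

    # Rough sketch of exercises as standalone objects (names are assumptions,
    # not CoreDogs' internals). Lessons embed an exercise with a marker such
    # as [[exercise:89]]; the renderer resolves it at display time.
    from dataclasses import dataclass

    @dataclass
    class Exercise:
        exercise_id: int
        title: str
        instructions: str

    EXERCISES = {
        89: Exercise(89, "Style the page", "Add a CSS rule that centers the heading."),
    }

    def render_lesson(body: str) -> str:
        """Expand [[exercise:N]] markers when a lesson is displayed."""
        for ex_id, ex in EXERCISES.items():
            marker = f"[[exercise:{ex_id}]]"
            rendered = f"<div class='exercise'><h3>{ex.title}</h3><p>{ex.instructions}</p></div>"
            body = body.replace(marker, rendered)
        return body

    def remaining(completed_ids: set) -> list:
        """The same store powers the separate 'exercises to do' view."""
        return [ex for ex_id, ex in EXERCISES.items() if ex_id not in completed_ids]

    print(render_lesson("<p>Try this:</p> [[exercise:89]]"))

Because the lesson only holds a reference, swapping exercise 89 for a better one never touches the lesson text.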

Content use. CoreDogs helped students use the content. A navigation system helped them find lessons quickly. CoreDogs remembered the last lesson students saw, and could take them there the next time they logged in. Authors could tag lessons and exercises with keywords. Students could use a “keyword cloud” to find relevant lessons quickly. There was also a general search system.
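
A small sketch of two of those content-use features, with hypothetical names (the real implementation was in Drupal): remembering where each student left off, and mapping keywords to lessons for the keyword cloud.

    # Hypothetical sketch, not the Drupal implementation: a last-lesson
    # bookmark and a keyword index behind a "keyword cloud."
    from collections import defaultdict

    last_seen = {}                       # student id -> last lesson path
    keyword_index = defaultdict(set)     # keyword -> set of lesson paths

    def tag_lesson(lesson_path, keywords):
        """Authors tag lessons and exercises with keywords."""
        for kw in keywords:
            keyword_index[kw.lower()].add(lesson_path)

    def record_visit(student_id, lesson_path):
        """Called on every page view; remembers where the student is."""
        last_seen[student_id] = lesson_path

    def resume(student_id):
        """Where to send the student the next time they log in."""
        return last_seen.get(student_id)

    def lessons_for(keyword):
        """Clicking a keyword in the cloud lists the lessons tagged with it."""
        return keyword_index.get(keyword.lower(), set())

    tag_lesson("/lessons/css-selectors", ["CSS", "selectors"])
    record_visit("student42", "/lessons/css-selectors")
    print(resume("student42"), lessons_for("css"))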

Implementation. I didn’t write the code from scratch. To save time, I customized Drupal version 6 (D6). Drupal is an open source content management system. Every Ivy League school and more than a quarter of all .EDU sites use Drupal (ImageX, 2013). It is widely used in government (e.g., the White House) and the private sector (e.g., the NBA).

CoreDogs was retired in 2011. Why?

  • It became clear that formative feedback from people was necessary. CoreDogs’ basic system was not enough.
  • Drupal 7 (D7) replaced D6. D7 improved content modeling and user experience.
  • The two courses I created with CoreDogs became obsolete.

CyberCourse 1

The next project was CyberCourse 1, built on D7. Of the many changes, the most important was technology-enabled formative feedback (Feng, Gobert, and Schank, 2014). All exercises are hands-on; students create, fix, or improve artifacts. Every student gets human feedback on every submission. CyberCourse 1 automates the process where possible, but leaves a human grader to judge student problem solving. CyberCourse 1 supports the whole process, from exercise and rubric creation, to submission, grading, student feedback review, and resubmission. Figure 2 shows an author adding a rubric item. Graders click rubric items to assess submissions (Figure 3), adding comments as needed. Students can resubmit as many times as they like until they complete the exercise.

Figure 2. Author creating a rubric item for an exercise
Figure 3. Grading an exercise with clickable rubrics
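
To show the shape of the workflow, here is an illustrative data model in Python. The field names are my assumptions, not CyberCourse 1’s actual schema, but the flow matches the description above: authors define rubric items, graders click the items that apply and optionally comment, and students resubmit until the exercise is marked complete.

    # Illustrative only; field names are assumptions, not CyberCourse 1's schema.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class RubricItem:
        text: str                 # the criterion the grader clicks
        feedback: str             # canned comment shown to the student

    @dataclass
    class Submission:
        student: str
        attempt: int
        artifact_url: str
        clicked: List[RubricItem] = field(default_factory=list)
        comment: Optional[str] = None
        complete: bool = False

    def grade(sub: Submission, clicked: List[RubricItem],
              complete: bool, comment: str = "") -> None:
        """Grader clicks the rubric items that apply; comments are optional."""
        sub.clicked = clicked
        sub.comment = comment or None
        sub.complete = complete

    def resubmit(prev: Submission, new_artifact_url: str) -> Submission:
        """Students can resubmit as many times as they like until complete."""
        return Submission(prev.student, prev.attempt + 1, new_artifact_url)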

The process is efficient. A grader can grade about 2,000 submissions per semester for one programming course. With CyberCourse 1’s tools, it takes 20 to 30 minutes per day to give personal feedback on every submission from every student.
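
As a rough check on that claim (the semester length and grading days per week below are my assumptions, not figures from the course):

    # Back-of-the-envelope check; 15 weeks and 5 grading days/week are assumptions.
    submissions = 2000
    grading_days = 15 * 5
    minutes_per_day = 25                       # midpoint of the 20-30 reported

    per_day = submissions / grading_days       # about 27 submissions per day
    seconds_each = minutes_per_day * 60 / per_day
    print(f"{per_day:.0f} submissions/day, roughly {seconds_each:.0f} seconds each")

Under those assumptions, clickable rubrics bring the cost of personal feedback down to about a minute per submission.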

The grader need not be the instructor. In 2016, I hired a remote grader for an information systems course. It worked well. The only concerns raised by students were about exercise specifications, which were readily resolved.

(There are some short videos showing the grading system in action.)

CyberCourse 1 improved the authoring tools as well. For example, problem-solving patterns help students with task transfer (Sontag, 2007). Patterns, called schemas in cognitive psychology, are ways of doing tasks that practitioners find useful. They often emerge from communities of practice.

Authors create patterns as objects separate from regular content, and embed them in lessons as required. The separation allows CyberCourse 1 to create a pattern catalog (Figure 4) that students use when doing tasks.

Figure 4. A pattern catalog helps students use patterns from the programming community of practice
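
This is the same embed-by-reference approach used for exercises. A small Python sketch (hypothetical names, not CyberCourse 1’s code) shows why the separation matters: the catalog is generated from the same pattern objects the lessons embed, so it cannot drift out of sync with the lessons.

    # Hypothetical sketch: patterns as standalone objects, embedded in lessons
    # by name and listed automatically in a catalog.
    from dataclasses import dataclass

    @dataclass
    class Pattern:
        name: str               # a community-of-practice way of doing a task
        problem: str            # when practitioners reach for it
        outline: str            # the reusable steps

    PATTERNS = {}

    def register(p: Pattern):
        PATTERNS[p.name] = p

    def embed(name: str) -> str:
        """What a lesson shows where the author wrote a pattern reference."""
        p = PATTERNS[name]
        return f"<aside class='pattern'><h4>{p.name}</h4><p>{p.outline}</p></aside>"

    def catalog() -> str:
        """The pattern catalog, built from every registered pattern."""
        return "\n".join(f"* {p.name}: {p.problem}"
                         for p in sorted(PATTERNS.values(), key=lambda p: p.name))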

CyberCourse 1 has similar tools for big ideas, a system for creating and awarding badges, and many other features. Naturally, CyberCourse 1 implements CoreDogs’ other features as well.

Two courses were built with CyberCourse 1: a programming course, and a course in information systems. Both are still being offered. The programming course is on its second major version, although both courses have changed over time. CyberCourse 1 makes updates quick and easy.

Development of the CyberCourse 1 software has stopped… but I have plans.

Effective?

CoreDogs and CyberCourse 1 gather a large amount of data. For example, the grading system tracks student performance down to the rubric item level. I haven’t done much with the data, apart from basic analysis to show students how doing exercises affects exam scores.

My own classes have improved. I have had to adjust my expectations of what students can do. I expect more from students than in the past, and they deliver. The change in student morale (and my own!) is also striking. I don’t have formal evidence, however, only my own experience.

Why haven’t I done more? Simply put: time. I spend time implementing practices that research already shows are effective. That does not guarantee that effects observed by others apply in our contexts, of course. Still, I think this approach is the best use of my time.

What now?

CyberCourse 1 is a little buggy, and the interface could use some work. It’s not something I would want other people to use. However, I’m planning CyberCourse 2, based on Drupal 8, and some newer ideas from the literature. It will be available as open source. Eventually. Hopefully.

Your thoughts?

Please comment if you have ideas to share.

References

Ambrose, S., Bridges, M., Lovett, M., DiPietro, M., & Norman, M. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.

Barker, L., Lynnly Hovey, C., and Gruning, J. (2015), “What Influences CS Faculty to Adopt Teaching Practices?,” SIGCSE ’15, March 4–7, 2015, Kansas City, MO, USA.

Barrow, L., and Rouse, C. (2005), “Do returns to schooling differ by race and ethnicity?” American Economic Review, 95(2), 83–87.

Bessen, J. (2014), “Employers Aren’t Just Whining — the “Skills Gap” Is Real,” Harvard Business Review, https://hbr.org/2014/08/employers-arent-just-whining-the-skills-gap-is-real.

Black, P., and Wiliam, D. (1998). “Assessment and classroom learning,” Assessment in Education: Principles, Policy and Practice, 5, 7–74.

Bransford, J. D., Brown, A. L., and Cocking, R. R. (2000), How People Learn: Brain, Mind, Experience and School, National Academies Press.

Brown, P. C., Roediger III, H. L., McDaniel, M. A. (2014), Make It Stick: The Science of Successful Learning, Harvard University Press.

Carey, B. (2013), How we learn: The Surprising Truth About When, Where, and Why It Happens, Random House.

Caulfield, M., “Choral Explanations,” May 13, 2016, https://hapgood.us/2016/05/13/choral-explanations/.

Center for Applied Special Technology (2011). Universal Design for Learning Guidelines version 2.0, Wakefield, MA. See http://www.udlcenter.org/aboutudl/udlguidelines.

Crouch, C. H., and Mazur, E. (2001), “Peer Instruction: Ten years of experience and results,” American Journal of Physics, 69, 970

Chew, S. (2016), “How to Get the Most Out of Studying,” video series, Samford University, https://www.samford.edu/departments/academic-success-center/how-to-study.

Diaz, V., Finkelstein, J., and Manning, S. (2015), “Developing a Higher Education Badging Initiative,” EDUCAUSE, August, 2015

Doane, D., Mathieson, K., and Tracy, R. (2000), Visual Statistics 2.0 (McGraw-Hill).

Eberly Center (2015), “What is the difference between formative and summative assessment?” https://www.cmu.edu/teaching/assessment/basics/formative-summative.html, accessed October 31, 2016.

Feng, M., Gobert, J., and Schank, P. (2014). CIRCL Primer: Technology Enabled Formative Assessment. In CIRCL Primer Series. Retrieved from http://circlcenter.org/technology-enabled-formative-assessment/.

Fincher, S., Richards, B., Finlay, J., Sharp, H., and Falconer, I. (2012) “Stories of change: How educators change their practice.” In Frontiers in Education Conference (FIE), 2012, pages 185–190.

ImageX (2013), “Why Drupal is Dominating the Higher Education Sector,” Sep 30, 2013, http://imagexmedia.com/blog/2013/09/why-drupal-dominating-higher-education-sector

International Labour Office (2010), A Skilled Workforce for Strong, Sustainable and Balanced Growth: A G20 Training Strategy.

Johnson, D. W., and Johnson, R. T. (2002), “Learning Together and Alone: Overview and Meta-analysis,” Asia Pacific Journal of Education, Vol. 22, Iss. 1, 95–105.

Kirschner, P., Sweller, J., and Clark, R. (2006), “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching,” Educational Psychologist, 41 (2), 75–86.

Kontra, C., Lyons, D. J., Fischer, S. M., and Beilock, S. L. (2015), “Physical Experience Enhances Science Learning,” Psychological Science, Vol 26, Issue 6, pp. 737–749

Levy, F., and Murnane, R.J. (2004). The new division of labor: How computers are creating the next job market. Princeton, NJ: Princeton University Press.

Marks, J., Bernett, D., and Chase, C.C. (2016), “The Invention Coach: Integrating data and theory in the design of an exploratory learning environment,” International Journal of Designs for Learning, 7(2), 74–92.

Mathieson, K. (2017), “Choral explanations in the wild,” The Higher Education Revolution, January 1, 2017, https://higheredrevolution.com/choral-explanations-in-the-wild-acb4ef0742b#.pw5tqki72.

Mathieson, K., and Bhargava, M. (2003), “Do Our Students Want Values Programs?” Journal of College and Character, V. 2.

National Center on Universal Design for Learning (2013), Definition of UDL, http://www.udlcenter.org/aboutudl/udldefined, accessed November 1, 2016.

Open Badges in Higher Education (2015), “About CyberCourse,” OBHE case study, https://sites.google.com/site/openbadgesinhighereducation/cyco.

Pellegrino, J. W., and Hilton, Margaret L. (2012). Education for Life and Work: Developing Transferable Knowledge and Skills in the 21st Century. Washington, D.C: The National Academies Press.

Penuel, W. R., Roschelle, J., and Shechtman, N. (2007), “Designing formative assessment software with teachers: an analysis of the co-design process,” Research and Practice in Technology Enhanced Learning, 02, 51.

Schwartz, D. L., Chase, C. C., Oppezzo, M. A., and Chin, D. B. (2011). “Practicing versus inventing with contrasting cases: The effects of telling first on learning and transfer,” Journal of Educational Psychology, 103(4), 759–775.

Soloway, E. (1986), “Learning to program = learning to construct mechanisms and explanations,” Communications of the ACM Volume 29 Issue 9, Sept. 1986 Pages 850–858

Sontag, M.E. (2007) Facilitating learning transfer through students’ schemata. Ph.D. thesis, Capella University, 2007, http://www.mariesontag.com/SontagDiss.pdf.

Sorva, J. (2013), “Notional machines and introductory programming education,” ACM Transactions on Computing Education, Volume 13, Issue 2, June 2013, Article 8.

Sengupta-Irving, T. and Enyedy, N. (2015), “Why Engaging in Mathematical Practices May Explain Stronger Outcomes in Affect and Engagement: Comparing Student-Driven With Highly Guided Inquiry,” Journal of the Learning Sciences, Volume 24, 2015 — Issue 4.

Wang, M. and Eccles, J. S. (2013), “School context, achievement motivation, and academic engagement: A longitudinal study of school engagement using a multidimensional perspective,” Learning and Instruction, Volume 28, December 2013, Pages 12–23

van Merriënboer, J. J. G., Kirschner, P. A., and Kester, L. (2003), “Taking the Load Off a Learner’s Mind: Instructional Design for Complex Learning,” Educational Psychologist, 38(1), 5–13.

Wiggins, G. and McTighe, J. (2005). Understanding by Design (Expanded 2nd ed.). Alexandria, Virginia: Association for Supervision and Curriculum Development.

Yeager, D.S., and Walton, G.M. (2011). Social-psychological interventions in education: They’re not magic. Review of Educational Research, 81, 267–301.

Yan, J., and Lavigne, N. C. (2014), “Promoting College Students’ Problem Understanding Using Schema-Emphasizing Worked Examples,” Journal of Experimental Education, 82(1), 74–102.