Friday, October 31, 2014

To Improve Student Performance, Start Thinking Like a Coach



I have a confession to make. I was wrong. You see, I once thought that teaching was lecturing, and I thought that because that is how my graduate mentors taught me to teach.
But I was wrong. Studies have shown that lecturing has little to do with teaching. A University of Maryland study found that right after a physics lecture, almost none of the students could answer the question: “What was the lecture you just heard about?” Another physics professor simply asked students about the material that he had presented only 15 minutes earlier, and he found that only ten percent showed any sign of remembering it (Freedman, 2012).
So what is teaching? John Hattie compared more than 100 factors related to student achievement from over 180,000 studies and discovered that feedback on student work had the greatest effect on learning (2009).
OK, I give margin comments like “vague” or “needs synthesis” on student work, so I’ve got that covered, right? Unfortunately, the brief margin comments that most faculty give on student work are almost completely unhelpful to the student. The student who sees “vague” thinks to themselves “What is vague about it? It’s not vague to me. Why is it vague to you?” The student who did not include a synthesis might not know what a synthesis is.
Simply pointing out a student’s errors is not all that useful to the student, but we do it on the mistaken belief that the purpose of feedback is to justify the grade. We subtract points for 10 things, and so we feel we need to list them in the margins.
But feedback is fundamentally different from grading, and conceptually separating the two is a key to great teaching. Grades are backward facing—they are an evaluation of past performance. Feedback is forward facing—it is information aimed at improving performance in the future.
Step away from the red pen
As Grant Wiggins notes, when it comes to giving effective feedback, the key is to stop thinking like a grader, and start thinking like a coach (2012). Coaches are fundamentally teachers, but they spend little time lecturing or grading. Instead, they teach through feedback. Ninety percent of their teaching is done by watching a player’s performance and giving feedback on what they did right or wrong, and how to improve.
Importantly, the kind of feedback they give to players is fundamentally different from the feedback most teachers give to their students. A coach doesn’t say “you’re standing wrong” and walk away, similar to a teacher leaving a margin comment that a student’s work is “vague.” That would not be helpful. Coaches instead say precisely what the player is doing wrong, and what they should be doing instead. For instance, a soccer coach might say:
“You are allowing goals because your technique is wrong. You are holding your hands down by your side like this. Because of this, you can’t get your hands up fast enough to block a high kick toward the corners. You need to hold your hands higher like this, and keep your knees bent like this so that you can react quickly to the shot. Now you try it.”
Note how the coach is not just telling the player what she did wrong, but also showing her the proper way to do it. As teachers we spend a lot of time telling students what they did wrong, but very little time showing them what doing it right looks like. Modeling good work is a key component of feedback—and improving student or player performance.
A coach also doesn’t give the player a laundry list of things to work on at once, but rather limits his or her feedback to one or two things for the player to work on at a time. For instance, the HBO series “Hard Knocks” follows an NFL team in the preseason. One episode showed a rookie running back talking to his position coach. The coach asked the rookie, “What one thing are we going to work on today?” to which the rookie replied, “Ball control.” “Good,” the coach said. “That’s the one thing we’re going to work on today.”
This is instructive. The rookie does not have just one thing to learn; he probably has 30 things to learn. But when it comes to improving performance, the coach doesn’t expect the player to work on all of those at once. The coach knows that improvement comes sequentially, by focusing on a limited number of things at a time.
Similarly, the best way to produce improvement in the student’s performance is to focus your feedback on one or two things at most to work on per assignment. Yes, you provide a grade and a brief account of how it was determined. But feedback is separate from that account. Like the coach, you are essentially saying to the student “I’ve listed the things you will need to work on. Now we will work on the first one.”
In essence, you turn your attention from the past to the future, and pick one or two major themes relating to problems in the student’s performance to provide feedback on. Perhaps you want to focus on how to frame a thesis. You might talk about what a thesis is, why the work lacked one, what a thesis for the work would look like, and how to put together a thesis in the future.
Note that you are spending time discussing the process of developing the work. Too often we only discuss the product, not the process that went into developing it. The product is a result of the process, so without discussing process, we are not providing information that the student can use to do better in the future.

Why students are grade-obsessed
Conceptually divorcing feedback from grading will also start to reduce students’ grade-obsession. Faculty often complain that students are grade-obsessed, but how did they become that way? They were not born that way. They did not drop out of the womb grade-obsessed. They were taught that, and to a great extent they were taught that by us when we put all feedback in service to the grade. We do it when we spend our first day of class talking about the grading system. We are telling them that the point of education is the grade.
We also teach students to be grade-obsessed by creating grading systems that preserve their errors. Students come into our offices to browbeat us into raising a poor grade on an exam because they know that the grade will get carried forward to their final grade. So they focus on doing something about the grade itself—not the learning.
By contrast, if a rookie running back starts camp without knowing pass patterns, but gets them down by the end, the coach does not say, “I can’t start him because his understanding is the average between not knowing at the beginning and knowing at the end.” Players are evaluated by the endpoint of their learning curve, not its average.
We all learn by our mistakes, and should encourage our students to make mistakes in order to learn. But instead we have a system that preserves, and hence punishes, students for their mistakes. Then we wonder why they are grade-obsessed.

The Secret of Self-Regulated Learning



Self-regulated learning is like your own little secret. It stirs from within you, and is the voice in your head that asks you questions about your learning.
More formally, self-regulated learning is the conscious planning, monitoring, evaluation, and ultimately control of one’s learning in order to maximize it. It’s an ordered process that experts and seasoned learners like us practice automatically. It means being mindful, intentional, reflective, introspective, self-aware, self-controlled, and self-disciplined about learning, and it leads to becoming self-directed.
Another secret about self-regulated learning is its strong positive impact on student achievement. Just the cognitive facet of it, metacognition, has an effect that’s almost as large as teacher clarity, getting feedback, and spaced practice, and even larger than mastery learning, cooperative learning, time on task, and computer-assisted instruction (Hattie, 2009).
Self-regulated learning also has meta-emotional and environmental dimensions, which involve asking oneself questions like these:
  • How motivated am I to do the learning task, and how can I increase my motivation if I need to?
  • If my confidence in my ability to learn this material sags, how can I increase it without becoming overconfident?
  • Am I resisting material that is challenging my preconceptions?
  • How am I reacting to my evaluation of my learning?
  • How can I create the best, most distraction-free physical environment for the task?
Metacognitive questions include these:
  • What is the best way to go about this task?
  • How well are my learning strategies working? What changes should I make, if any?
  • What am I still having trouble understanding?
  • What can I recall and what should I review?
  • How does this material relate to other things I’ve learned or experienced?
Asking oneself these questions also constitutes elaborative rehearsal, which is the thinking process that moves new knowledge into long-term memory.
Just because we may practice self-regulated learning doesn’t mean our students do. Most of us were among the best students, especially in college, and the best students can become the worst teachers because the material came to us quickly and easily.
In fact, few of our students demonstrate self-regulation—not even those in professional schools. When asked to identify the factors they considered important in their learning, 132 veterinary students most commonly cited the quality of their faculty’s instruction, not their own effort or learning skills (Ruohoniemi & Lindblom-Ylänne, 2009). Not surprisingly, younger, undergraduate students have the same mindset. They see learning as something that is “happening” to them, and our job is to make it happen and make it easy. After all, learning was easy in elementary and high school, so why should it require much time and hard work now?
How do you get students to practice self-regulated learning? First, you explain to them what it is and how it will benefit them and then have students do self-regulated learning activities in class and as homework. Then you wait for them to see the good results.
Students don’t mind these assignments. They’re short, low-stress, and worth a point or two, and students learn about themselves. You don’t mind them either because, with 90% of them, you just give credit for completion: pass/fail, all points or no points. Most in-class activities don’t even require this. You need only to grade the major reflective meta-assignments, the kind that accompany service-learning, problem-based learning, or a lengthy simulation.
Let’s consider a few proven self-regulated learning activities and assignments; many more are in Creating Self-Regulated Learning: Strategies for Strengthening Students’ Self-Awareness and Learning Skills (Stylus, 2013):
  • Students answer two or three reflective questions on the reading or podcast.
  • They write about what they learned by doing an assignment.
  • They redo the problems they missed on homework and exams, or similar ones, and explain the proper procedure.
  • They describe their reasoning process in solving a “fuzzy” problem—how they defined the problem, decided which principles and concepts to apply, developed alternative approaches and solutions, and assessed their feasibility, trade-offs, and relative worth.

Learning on the Edge: Activities to Promote Deep Learning



The explosion of educational technologies in the past decade or so has led everyone to wonder whether the landscape of higher education teaching and learning will be razed and reconstructed in some new formation. But whatever changes might occur to the learning environments we construct for our students, the fundamental principles according to which human beings learn complex new skills and information will not likely undergo a massive transformation anytime soon. Fortunately, we seem to be in the midst of a flowering of new research and ideas from the learning sciences that can help ensure that whatever type of approach we take to the classroom—from traditional lecture to flipped classes—can help maximize student learning in our courses.
One fascinating implication of this growing body of research for me has been a greater awareness of the edges of a traditional class. Environmental biologists have dubbed landscapes that sit on the edge of two different ecosystems (such as a forest and a grassland) an ecotone. These spaces are known for having rich biological diversity, because they can support creatures from both sides of the ecotone, and encourage mixing between the bordering zones. The especially rich nature of the ecotone has also become known as the “edge effect.”
The ecotone of a traditional college class would be the first and last few minutes of the class session, when students are walking in the door from their busy lives outside of the classroom—coming from meals with friends, from exercise or sports activities, from socializing either in person or through their phones—and entering this more formal learning space. Too often these first and last minutes of class are frittered away with administrative details, hurried reminders about due dates or admonitions about upcoming assignments. But what if we saw those ecotones of the classroom exactly as we saw them in the natural world—as especially rich and fertile periods, ones in which we can begin and end the process of promoting deep learning for our students?
Two key cognitive activities seem to me especially promising in terms of their ability to maximize student learning in the ecotones of higher education: predicting and retrieving.
Predicting. In a 2009 article in the Journal of Experimental Psychology, Kornell, Hays, and Bjork describe a series of experiments in which they gave participants test questions on subject matter before they had the opportunity to study or learn it correctly. One of their interests was determining whether incorrect answers on a pre-test—which are likely to happen when the subjects have not yet been exposed to the material—would create further difficulties for the learner down the line. In other words, if learners give a wrong answer on a pre-test, will that answer stick in their heads and make them more likely to repeat it on subsequent exams?
Not only did the researchers discover that wrong answers on pre-tests do not interfere with subsequent learning—they also discovered that asking learners questions about subject matter before they learned it actually improved their performance on subsequent tests. The authors speculate that this happens because an initial attempt to construct a response to a question “create[s] a fertile context for encoding the answer when it is presented.” In other words, the learner confronted with a difficult question marshals associated ideas and facts in her effort to come up with an answer; in doing so, she prepares a “fertile context” which enables her to learn the true answer more deeply when she hears it.
The activity of prediction seems tailor-made for the edges of a class period. In the opening moments, we can ask students to make predictions about problems that will be solved in class, about how theories from last night’s reading might apply to current events, or about how an experiment will turn out. In the closing moments, we can ask them to speculate about how the novel they are finishing for the next class will end, about how the theory presented today might be critiqued in a subsequent class, or about how a course concept might predict future political events.


Articulating Learning Outcomes for Faculty Development Workshops

The use of student learning outcomes (SLOs) is commonplace at regionally accredited colleges and universities in the United States. I have been working with SLOs in one form or another for the past decade, even before they became fashionable. Many years ago, while I was an instructor in the US Navy, SLOs were called Terminal Objectives. After the service, I taught GED classes, and at that time SLOs were referred to as Learning Goals. Regardless of the latest trendy technical name, SLOs are clear statements that describe the new skills students should be able to demonstrate as a result of a learning event such as a college course (Ewell, 2001). Whether teaching online, on-ground, or via a blended environment, the importance of defining the intended outcomes before instruction takes place cannot be overstated, because SLOs identify fundamental and measurable student skills, help outline needed curricular content, and define appropriate assessment.
This article, however, is not about the SLOs we use in our classrooms as we are all very likely already acquainted with this process; it is instead about employing similar outcomes-based tactics in the practical development, facilitation, and assessment of faculty development. As much as our students need effective instruction, faculty members need high-quality training as well. From federal compliance topics such as FERPA (Family Educational Rights and Privacy Act) to instructional strategies related to classroom management, active learning, and technology, to name just a few, there is no shortage of competencies faculty need to develop in order to function well in any learning environment.
Driscoll and Wood (2007) defined the key features of learner-centered, outcomes-based instruction as follows:
  • Faculty clearly communicate the intended outcomes of each lesson in advance
  • The stated outcomes are accessible and made public
  • Students have clear expectations and understand the purpose of the instruction
  • Students’ progress is determined by the achievement of learning outcomes
  • Assessment results are analyzed and used to improve curricula and align instruction
How far of a conceptual leap would it be to apply these same features to our own development as faculty members? As an instructor, I would certainly appreciate it if (a) the intended outcomes of my own training were communicated in advance; (b) the outcomes of my training were accessible; (c) I had clear expectations and understood the purpose of my training; (d) my progress as an instructor was determined by the achievement of clear training outcomes; and especially (e) the assessment results of my own training were analyzed and used to improve future training. Take a moment to answer the following questions as you reflect on past training sessions you attended:
  • How was the training announced? Were the expected outcomes of the training communicated in advance or was it via an email that read something to the effect of, “let’s get together and chat about FERPA”?
  • How was the training presented? Were the training outcomes listed on PowerPoint slides? If not, were they explained verbally? A well-defined outcome for FERPA training would be for example, “By the end of this training you will be able to apply FERPA policy to determine when and when not to disclose student information.” Was the training engaging, relevant, and current? Did you have any input in its content?
  • How were the skills you gained during training later assessed? Through classroom observations that focused particular attention on the application of the new skills? A quiz a few weeks after the training? By reviewing students’ related comments on end-of-course critiques?
