I’ll be teaching both Physics 11 and Physics 12 again next year, and I’m interested in doing more performance task assessments than I did this year. I had intended to use more goal-less problems in Physics 11 this year, but I always felt like I was under the gun with time constraints. As for Physics 12 next year, my co-worker is interested in sharing some common assessment strategies between his classes and mine.
Ok, I’m going to say it: I’m done with field trips. Not because I don’t think they have value, but because of their downstream consequences. You see, I hate it when students miss class. I think missing a class is generally really bad, worse than not doing an assignment or homework. I can think of 4 different field trips I’ve taken students on. The classic trip is taking Physics 11 to Playland, along with taking Science 8 to Playland.
I’ve been asking many of my grade 11 students what their ideal math lesson would look like. Not in terms of content, but in terms of process. I wanted to focus this question on math instead of science because I didn’t want to confound typical learning activities with demonstrations and experiments. Most of the students cited very similar ideas, as follows:

1. take up questions about homework or last day’s work
2. connect the new material to what they were working on last day
3. possibly give some notes
4. give (lots of) examples
5. have them try some practice questions

#1 above was universal; all students started with this.
One thing I’ve been trying to implement more and more into my units is Performance Tasks. McTighe and Wiggins, in their Understanding by Design framework, say that a Performance Task is an authentic assessment where students demonstrate the desired understandings. In my context, I currently use small SBG quizzes for the bulk of my assessments. Jay McTighe, whom I had the pleasure and privilege of having lunch with, would probably call my quizzes “supplementary” evidence.
My classes just finished doing a conservation of momentum lab. In many ways it was a big disappointment. We ended up spending 2-1/2 classes on the lab, with little to show for it. The general idea was to record position and time data from 6 videos (6 different types of collisions), calculate velocities and momentum, and compare total momentum before and after. There were lots of problems with this: Students would make mistakes in recording data or making a calculation, and every mistake helps to obscure the goal of seeing that total momentum doesn’t change.
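The core calculation the lab was after can be sketched in a few lines. This is a minimal illustration with made-up masses and velocities (not the actual lab data), just to show what students were trying to confirm: the sum of each cart’s m·v should be the same before and after a collision.

```python
# Hypothetical check of momentum conservation for one collision.
# Masses (kg) and velocities (m/s) are illustrative example values,
# not data from the actual lab videos.

def total_momentum(masses, velocities):
    """Total momentum: sum of m * v over all objects."""
    return sum(m * v for m, v in zip(masses, velocities))

masses = [0.50, 0.25]      # two carts
v_before = [0.40, 0.00]    # cart 1 moving, cart 2 at rest
v_after = [0.10, 0.60]     # velocities measured after the collision

p_before = total_momentum(masses, v_before)
p_after = total_momentum(masses, v_after)

print(p_before, p_after)  # prints 0.2 0.2 -- equal, so momentum is conserved
```

In the real lab, of course, every measurement error in the video data propagates into these velocities, which is exactly why the totals rarely matched as cleanly as they do here.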
Lately I’ve been thinking a lot about Modeling Instruction (MI) and Cognitive Load Theory (CLT). I started this post a couple of weeks ago and then was further inspired by a post by Brian Frank (if you read both posts you’ll see some similarities). In my head I know that I want to compare them, but that is something that I shouldn’t really do because MI is a teaching and learning methodology while CLT is a theory about how people learn.
I came across a paper on Piaget cognitive levels and learning in physics. There were lots of interesting things to think about from this paper, but one thing in particular caught my attention. The concept behind this paper is that people go through stages of cognitive development. In high school we typically get students that arrive with concrete operational thinking, and they hopefully leave as formal operational thinkers. The following two math problems are good examples for comparing concrete to formal.
So we’re just over 1/3 of the way through Physics 11. Despite my intentions to ward off predictable problems, they nevertheless continue to appear. One of them is Hooke’s Law. My students did a lab to see the relationship between force and how a spring reacts to the applied force. They could graph the relationship and figured out that the slope was the stiffness of the spring. Most students also got an equation of the line from their graphs, and I helped this process along by generalizing the equation to Fs = kx, aka “Hooke’s Law.”
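The slope-finding step the students did on paper can be mirrored in a short calculation. This is a sketch with invented force/extension data (not from the actual lab), showing how the spring constant k in F = kx falls out as the least-squares slope of F versus x through the origin.

```python
# Hypothetical spring-lab data: applied force F (N) vs extension x (m).
# These values are illustrative, not actual student measurements.

forces = [0.0, 1.0, 2.0, 3.0, 4.0]           # N
extensions = [0.00, 0.02, 0.04, 0.06, 0.08]  # m

# Least-squares slope through the origin: k = sum(F*x) / sum(x^2).
# This is the spring constant in Hooke's Law, F = kx.
k = sum(F * x for F, x in zip(forces, extensions)) / sum(x * x for x in extensions)

print(f"k = {k:.1f} N/m")  # prints k = 50.0 N/m for this data
```

Forcing the fit through the origin matches the physics (zero force means zero stretch); an ordinary two-parameter fit would also report any systematic offset in the measurements as an intercept.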
A few weeks ago in Physics 11 I decided to return to my previous SBG scheme. I had started the year using a level scale. The reason was that I wanted to give specific difficulty targets for learning objectives. However, this method turned out to be too confusing, especially for the students. I had the feeling that my students had only a vague understanding of what they were assessing in their work, or how.
I’m really kicking myself for not getting this ready before the Catalyst conference in Richmond. I’ve created a form where attendees, or anyone else interested in SBG, can input their name, email address and whether or not they want to be added to a list of SBG users. What I/we actually do with this list is a whole other question… Maybe I can sort names and just let people know who else is doing SBG in their district.