SBG for 2017 - A New Plan

As It Stands

After two months, I decided to return to my previous system of SBG objectives. Read on to see what I tried but ultimately didn’t continue with.

This year was once again fairly successful with SBG. I managed to work the Transfer Tasks into my system reasonably well, which made me feel better about students who earn “mastered” on all of their learning objectives. However, I’m still not satisfied with how this works out. I have a few too many students sitting at the 95% range whose learning portfolios make it look like they have truly mastered their learning objectives, yet some of them are not able to successfully complete more challenging questions. That’s what the transfer tasks are there for, but the whole system feels a bit clunky.

As for my 3-point scale of beginning/developing/mastered, I think this is still valid because of its clarity in reporting whether a student reaches the learning objective or not.

4 Point Scale

To reacquaint myself with other SBG ideas, I went through my Robert Marzano book, Formative Assessment and Standards Based Learning, again. I was specifically looking for information on 4-point rubric scales, and from my reading I rethought what it means to have a standard. Instead of having a larger number of standards graded on nothing/something/mastered, I considered what would happen if I made the level of question being asked the standard itself, as seen in the table below.

Score | Assessment Standard
4.0 | More complex content
3.5 | Can do level 3.0 and some of level 4.0
3.0 | Target learning goal
2.5 | Can do level 2.0 and some of level 3.0
2.0 | Simpler content
1.0 | Partial success with simpler content, with help
0 | No success

An entry-level question would be really basic, a developing question would be “normal,” and a mastery question would be challenging. By increasing the number of levels in the rubric to 4, it should be possible to maintain a high standard for the highest level while still allowing students to reach up to 95%, as shown in the table below.

Scale Score | Percentage Score
4.0 | 100
3.5 | 95
3.0 | 90
2.5 | 80
2.0 | 70
1.0 | 60
below 1.0 | I / 50
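To make the conversion above concrete, here is a minimal sketch of the scale-to-percentage mapping in Python. The function name and the choice to report everything below 1.0 as 50 (the “I” row) are my own assumptions for illustration, not part of any gradebook software.

```python
# Hypothetical lookup for the scale-score-to-percentage table above.
SCALE_TO_PERCENT = {4.0: 100, 3.5: 95, 3.0: 90, 2.5: 80, 2.0: 70, 1.0: 60}

def to_percentage(scale_score):
    """Convert a rubric scale score to a percentage.

    Scores below 1.0 are reported as 50 (flagged "I" in the table).
    """
    if scale_score < 1.0:
        return 50
    return SCALE_TO_PERCENT[scale_score]
```

For example, `to_percentage(3.0)` returns 90, matching the idea that covering the target learning goal alone caps a student at 90%.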

Comparison

This scheme accomplishes much of what I’m doing now. Covering all the basics gets a student to 90%. From there up, a student has to start succeeding on challenging questions. Hopefully students at my school would buy into the idea that 95-100% truly means exceeding expectations, not just meeting them.

Here is a link to a proposed set of learning objectives and standards for physics 11 (mechanics only) that I put together. Below I’ve included a screenshot of a couple of the learning objectives.

[Screenshot: LOBGrubric]

The Good

This scheme still functions as binary marking, which I see as a good thing. At any given level, the objective is either met or it isn’t. So if I give 3 questions for a Level 2.0 assessment, all of them have to be essentially correct in order to say that a student was successful at that level.
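The all-or-nothing rule at each level can be sketched in a few lines. This is just an illustration of the logic described above; the function name and boolean-list representation are assumptions of mine.

```python
# Sketch of the binary rule: a student passes a level only if every
# question at that level is correct.
def passes_level(question_results):
    """question_results: list of booleans, one per question at the level."""
    return all(question_results)
```

So a Level 2.0 assessment with results `[True, True, False]` would not count as success at that level, even though two of three questions were correct.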

The Question

One problem I can see arising is how I choose questions for each level. Will I be able to consistently rate a question according to the rubric I make? A bigger problem comes in the day-to-day functioning. Specifically, how would I determine who gets to do a Level 4.0 assessment, and when do they do it? Surely someone who is at Level 2.0 doesn’t need to do a Level 4.0 question. However, what if they did, and they were successful? Does that mean they get a score of 4 for that learning objective? I don’t think it should, but I can see how this could get confusing.

I would love to read your comments on this idea!