For Module C, Lesson 3, I opted to post a suggested iOS/Android app for embodied learning, and to discuss the nature of participatory simulations and handheld devices.
I downloaded and tried out the app Accelerometer Recorder for the Android OS. This app, written by Middlemind Games, records the acceleration measured by your Android device and saves it to a CSV file. The acceleration is recorded as three separate components: x, y, and z. This is a very useful app for physics students in the Lower Mainland who travel to the PNE for Amusement Park Physics. Here is a link to an educational resource package for amusement park physics at the PNE. Of course, any amusement park or ride can be used, and there are lots of different resources around for teachers and students alike. Having a digital device that accurately tracks acceleration is a powerful tool for learning. I can’t think of a better example that captures the idea of embeddedness, coupling, and adaptation (Winn, 2003). I believe that the ability to track movement and to see and realize a physics property leads to a heightened sense of presence for students. Although on past amusement park physics trips the students were truly not focused on physics, I have no doubt that with an accelerometer recorder they would at least give some thought to the device and the information it provides. Furthermore, the real-time, accurate feedback may also give the student an adaptation that results in a more concrete understanding of acceleration. While I am aware of studies done on amusement park physics, I’m not currently aware of any research that ties learning to the specific use of digital accelerometers in this environment, and it would be an interesting topic to investigate.
Accelerometer Recorder costs $1.39 (CAD) and is available on the Google Play Store. You may also want a file browser app in order to view and copy/send your CSV file to a computer. I use ASTRO File Manager from Metago, and it is free.
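Once the CSV file is on a computer, it takes only a few lines of code to turn the raw x, y, z components into the quantity students actually talk about on a ride: the overall g-force. Here is a minimal sketch in Python; note that the column names (`time`, `x`, `y`, `z`) are an assumption about the app's output format, so check the actual file header before using it.

```python
import csv
import io
import math

# Hypothetical sample of an Accelerometer Recorder CSV; the real app's
# column layout may differ — inspect the file header first.
sample = """time,x,y,z
0.00,0.1,0.2,9.7
0.02,0.3,-0.1,9.9
0.04,4.2,1.1,11.3
"""

def magnitudes(csv_text):
    """Return (time, |a|) pairs, where |a| = sqrt(x^2 + y^2 + z^2)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    result = []
    for row in reader:
        a = math.sqrt(float(row["x"]) ** 2
                      + float(row["y"]) ** 2
                      + float(row["z"]) ** 2)
        result.append((float(row["time"]), a))
    return result

for t, a in magnitudes(sample):
    # Dividing by g ≈ 9.81 m/s² expresses the reading in "g's",
    # which is how students usually describe amusement park rides.
    print(f"t={t:.2f} s  |a| = {a:.2f} m/s²  ({a / 9.81:.2f} g)")
```

A device sitting at rest should read about 1 g (the accelerometer measures the normal force), which makes a nice classroom sanity check before the trip.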
Describe the nature of the activities that may have been central, in your opinion, to the learning experiences described in the papers you read.
Roschelle, Penuel, Yarnall, Shechtman, & Tatar (2005) describe the now well-researched area of assessment for learning in science courses, some of the ill effects of today’s assessment practices, and how handheld mobile devices may help with assessment.
Roschelle et al. start with an overview of assessment in schools and the effect that positive assessment can have. Much of this information has been highlighted by Black and Wiliam (1998) in their seminal meta-study on assessment. The following are some of the key issues surrounding assessment in science:
- Teachers’ assessments rarely coincide with their instructional goals
- Teachers often do not accurately correlate student performance with learning
- Teachers often do not have adequate time to plan proper assessment
Even when a teacher intends to follow through with assessment reform, they are often not successful. This state of assessment practice led directly to Roschelle et al.’s study of handheld assessment devices. Note that the question for this post is about “learning” whereas the research is about assessment. In fact, as commonly noted (Black, Harrison, Lee, Marshall, & Wiliam, 2004; Deddeh, Main, & Fulkerson, 2010; Gibbs & Simpson, 2004), assessment is inseparable from learning. Roschelle et al. use the term “informate” to make this distinction. The assessment informs the learning process, and handheld devices help this process. Informate is also used to draw attention to the need for informing, rather than automating, assessment.
To accomplish their goal of informating assessment, Roschelle et al. developed five guidelines:
- Design for teachers who are “in transition”
- Exploit the unique affordances of handheld devices as compared to paper and pen
- The handheld assessment should represent significant assessment, not trivial tasks
- The handheld assessment should be a simple addition to pre-existing inquiry tasks
- Design the assessment around social activities
Roschelle et al. found only a few handheld tools that met their guideline requirements. One was Classtalk, an automated response system used for Peer Instruction (Crouch & Mazur, 2001; Suppapittayaporn, Emarat, & Arayathanitkul, 2010), and another was Sketchy, a handheld sketching tool.
Sketchy really caught my attention because it seems to address issues that I’ve thought about quite often during formative assessment. In particular, I notice that in math and science instruction, it is the norm (and expected during pre-service teacher training) for teachers to ask the class questions. Quite often, verbal explanations are difficult to convey, while drawing pictures, graphs, and equations is a much more relevant method of communication. I have attempted in the past to bring this type of formative assessment into the classroom using wireless tablet (not tablet PC) technology for the students to use. It should be noted that my attempts did not go well: the tablet was awkward to use without practice, and I only had one. Sketchy, on the other hand, proved relatively successful in meeting the guidelines set out by Roschelle et al.:
Sketchy makes unique use of the handheld representational medium; drawing sketches is much easier with a stylus than with a mouse, and the computer makes it easy to produce animations of processes that occur over time – a big advantage over paper. (Roschelle et al., 2005, p. 200)
As the area of analytics grows, I would expect to see more progress in informating assessment technologies. Whether these new tools will be mobile devices that give in-depth feedback, or whether they will be used more passively, as demonstrated by the new Khan Academy classroom assessment model, remains to be seen.
I may not have been completely explicit above: my answer to the question of what activity is central to the learning experience is simply the formative feedback the devices provide, which helps address the assessment issues I discussed.
Response to Tom Whyte
Tom responded to my original post above, and I really liked his thoughts and questions. I think it is worthwhile extending this conversation to my eFolio: not only does the topic interest me, but I think it is very important.
Great question Tom.
Black et al. (2004) emphasize how important professional collaboration is for assessment for learning. I believe Wiliam pegged the number at 70 minutes per month of collaborative team time required for effective AFL. This was based on his visits to schools across the USA, where his experiences mirror yours: despite having the knowledge and intent to implement AFL, teachers have a very tough time with it.
I also personally feel that marks and grades get in the way of formative assessment. They don’t have to, but in reality I think they do. If students chase marks, assessments become inauthentic. Furthermore, if marking is reduced to subtracting marks (e.g., losing half a mark for a missing unit), then the bigger picture that assessment provides can get lost in the noise.
I have a student teacher this year, and I definitely see how their training is very much geared towards the nuts and bolts of teaching, and much less towards the more holistic aspects. So as long as AFL is NOT seen as part of the nuts and bolts of education, we can continue to expect new teachers to be neither immersed in nor even moderately versed in this aspect of education and learning.
Conceptual change is not just a constructivist learning theory for students; it also applies to teachers (Gregoire, 2003). So as long as teachers feel comfortable and familiar with their current practices, a major shift in focus is unlikely. Take the analogy of Newton’s Third Law and the conceptual change model (CCM). If students are not exposed to a discrepant event, or dissonance, and they have no emotional reason to change their views, then they are not likely to change their mind that a truck hits a fly with much more force than the fly hits the truck. The same goes for teachers and assessment.
Bringing it back to Lesson 3, this is a very neat and optimistic aspect of assessment with handheld devices. In many ways, the device packages pedagogy in a box. For example, Sketchy brings forward an assessment practice that doesn’t have a history of marks and grades. It is what it is, and it informs learning. The same goes for clickers and peer instruction. Granted, these tools can be used improperly, but at least there is little to no history to give a teacher misconceptions about what they can be used for.
Responses to My Classmates
I did a reasonably comprehensive review of most of the posts from my classmates. A couple of the topics really caught my attention, and I’m providing my responses here.
Gestures Improve Learning
Steph Tobin posted a discussion on incorporating technology that brings gestures into classroom learning. I picked up on one aspect of Steph’s post, which asked how we know whether it’s worth incorporating a new technology:
“Again, one wonders if it was the technology or the teaching methods that made a difference here. The study did not elaborate on instructional methods. Should we use technology even if it doesn’t affect learning outcomes compared to traditional methods?”
I think it is important to keep some perspective when rationalizing practice around research and studies. While we can do our best and use a Solomon four-group study design, and so on, there are still many variables that cannot be controlled for. It reminds me of a quote from Peter Liljedahl (a math professor at SFU):
“… the popularity of RCTs in the early part of the century came from the agriculture model of research – this plant gets water, this plant gets light, this plant gets water and light, etc. In the late 60’s researchers began to realize that kids are a lot more like people than like plants.”
In other words, I think you touch on two very important topics. Peter Liljedahl also said,
“In fact, one recent study showed that replacing a good mathematics teacher with a mid-range mathematics teacher had the same effect on performance as increasing the class size from 30 to 60 students. The teacher is the biggest difference maker. Nothing compares – not the curriculum, not the class size ranges, not the available resources, not the education of the parents, not the socio-economic status of the student – NOTHING.”
So I would say that, absolutely, yes, we should feel comfortable using technology even if it doesn’t affect learning outcomes in the research, for the simple reason that the technology may positively affect the learning outcomes in YOUR classroom. Certainly research informs us and tells us what the best path forward may be, and we should consider this when implementing a new strategy or tool such as gesture tools. And of course we need to rationalize our decisions as educators.
I’d also like to give some personal context for my statement above. I’m not a wishy-washy, let’s-try-anything kind of person. I’m originally an engineer by training and spent a decade working in a tightly controlled environment where validation of R&D was strictly required before a technology could be used. So I appreciate that aspect of the topic as well.
Why Not Role Play in Science and Mathematics Classrooms?
Valerie Wells explored the topic of embodied learning through role play, and bringing this environment into science and math classrooms. While there was some discussion about whether role play is good, bad, or indifferent, I was interested in finding out more about exactly what technologies or methodologies could be used for this.
Do you have any guesses, ideas, or hypotheses about where this part of mobile technology may grow? I find that this topic, like some others, gives us just a small taste of what is perhaps possible yet seems quite distant. Jasper is another example, and I think there are many others.
My hunch is that we are at least one or two breakthroughs away from using mobile devices for role playing. Maybe there will ultimately be some kind of Nintendo DS type of device, where you get to program a character or characteristic into the device, and when other people with devices come close to you, the devices share information. Perhaps something like this could enhance CSI type science labs, or maybe students can take roles in math lesson plays.
Are there other ideas out there?
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the Black Box: Assessment for Learning in the Classroom. Phi Delta Kappan, 86(1), 8.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7.
Crouch, C. H., & Mazur, E. (2001). Peer Instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970–977.
Deddeh, H., Main, E., & Fulkerson, S. R. (2010). Eight Steps to Meaningful Grading. Phi Delta Kappan, 91(7), 53–58.
Gibbs, G., & Simpson, C. (2004). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, (1), 29.
Gregoire, M. (2003). Is it a challenge or a threat? a dual-process model of teachers’ cognition and appraisal processes during conceptual change. Educational Psychology Review, 15(2), 147–179. doi:10.1023/A:1023477131081
Roschelle, J., Penuel, W. R., Yarnall, L., Shechtman, N., & Tatar, D. (2005). Handheld tools that “Informate” assessment of student learning in Science: a requirements analysis. Journal of Computer Assisted Learning, 21(3), 190–203.
Suppapittayaporn, D., Emarat, N., & Arayathanitkul, K. (2010). The Effectiveness of Peer Instruction and Structured Inquiry on Conceptual Understanding of Force and Motion: A Case Study from Thailand. Research in Science & Technological Education, 28(1), 63–79.
Winn, W. (2003). Learning in artificial environments: Embodiment, embeddedness, and dynamic adaptation. Technology, Instruction, Cognition and Learning, 1(1), 87–114. Full-text document retrieved on March 23, 2012, from: http://www.hitl.washington.edu/people/tfurness/courses/inde543/READINGS-03/WINN/winnpaper2.pdf