A common struggle for educators is helping learners respond successfully over time. Going back to the post on knowing your learners, one of the stages of learning discussed was maintenance. This is the ability of the learner to respond correctly over time after the content has been taught. Here are two examples:
- the learner is able to solve 2-digit by 2-digit multiplication problems when you have moved on to 2-digit division
- the learner is able to compose a persuasive essay (while using word prediction software) when you have moved on to creative writing
Helping learners maintain content over time is essential to their long-term success. There are many different ways to target the maintenance stage of learning. One practical strategy is bell ringers (aka warm-ups) and flashbacks. Both are short samples of work covering prior content that the learner completes independently. Most commonly, these are problems that teachers have students work at the beginning of a new class period (bell ringers/warm-ups) or during times of transition (flashbacks). Here are some tips to guide your bell ringer/flashback use.
- Only include content/skills that students have already demonstrated mastery of. If not, then you are not targeting maintenance of a skill (because there was never a skill to maintain), and you significantly increase the chance that students will engage in off-task and/or problematic behavior.
- Start with where your learners are (even if it means going to a prior grade’s content) and cycle through newly learned content as time progresses. Being intentional will ensure that material taught in September will continue to be included in bell ringers in November, February, and May.
- Ensure students receive feedback on their work. One big mistake with this activity is having students complete it with no feedback provided. This is extremely dangerous, as student errors can be allowed to persist over long periods of time. You can grade and return the work, or use a digital submission system (like a student response system).
- Make it fun! You can set class-wide goals for completion and/or accuracy in addition to individual student goals. Be sure to celebrate learners' success when it occurs.
Let us know how it goes! Comment below with any tips for using bell ringers/flashbacks that you have to share.
A while back I made a very short post titled AT vs. IT. Basically, I was commenting on how odd it is that assistive technology for some students is instructional technology for other students, and vice versa. Sometimes I wish we could just call it all technology and stop the debate.
The purpose of this post is to expand on that previous post and share some information I received last year when listening to Dave Edyburn speak at the Council for Exceptional Children conference in Nashville, TN. He brought up a couple of points that I believe are worth repeating here. The first is that there is very little data showing that assistive technology works. He said, which I believe to be very true, that the primary data we have on AT are the receipts showing how much we spent on it. There are a number of reasons the data is hard to find. One is that technology changes so fast that by the time a study is run and published, the technology either no longer exists or has been updated with even more features that weren't originally available. I believe there is much more research that could be done in this area, however, and I appreciate the work that Mr. Edyburn does each year to disseminate information on what research does exist.
The second thing I wanted to repeat from that session (and the main reason for this post) was the way instructional and assistive technology were defined through the use of data. Often in conducting research we use a withdrawal design (we will have a post soon on studies we ran using this design). The basic idea of a withdrawal design is that you collect data with and without a support over time to determine whether the support is making a difference. For example, if a student types on a computer with no support, does he or she do better when using word prediction software while typing? To know for sure, we would collect baseline data with no word prediction for a set number of sessions, then take data with the support, without the support, and again with the support. The longer you continue this pattern, the better the data. The idea is to rule out the student doing better or worse due to a one-time circumstance (illness, content that is more difficult than usual, etc.).
Back to the difference between AT and IT… Going by what was presented in the session, if a student always does poorly without a support but always performs well with it, that is assistive technology – the student must have it to perform well. It is helping them overcome a barrier. However, if the student does poorly without the support to start with, then gradually gets better over time after having used the support, that is instructional technology – the technology actually helped the student learn to perform the task independently over time.
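The decision rule above can be sketched in code. This is a hypothetical illustration only: the phase labels, score values, and thresholds are invented for the example, not part of any published analysis. It simply compares average performance across alternating baseline (no support) and support phases of a withdrawal design.

```python
def mean(scores):
    """Average of a list of session scores."""
    return sum(scores) / len(scores)

def classify(phases):
    """Classify a withdrawal-design data set.

    phases: ordered list of (label, scores) pairs, with labels
    alternating "baseline" (support withdrawn) and "support"
    (support in place), e.g. A-B-A-B. Labels and rule are
    illustrative assumptions, not a formal standard.
    """
    base = [mean(s) for label, s in phases if label == "baseline"]
    supp = [mean(s) for label, s in phases if label == "support"]
    # AT-like pattern: performance drops every time the support
    # is withdrawn and recovers every time it returns.
    if max(base) < min(supp):
        return "AT-like"
    # IT-like pattern: later baselines catch up to supported
    # performance, i.e. the skill transferred over time.
    if base[-1] >= min(supp):
        return "IT-like"
    return "inconclusive"
```

For example, a student who scores 2–3 without word prediction but 8–9 with it in every phase shows the AT-like pattern, while a student whose final no-support phase matches the supported phases shows the IT-like pattern. Real single-case analysis would also examine trend and overlap within phases, not just means.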
To date this is the best way I've heard the difference between AT and IT defined. And what I like most is that it requires data collection!
I think back to when I started out as an educator. I had a lot of book knowledge and little experience. What I learned really quickly is that there were numerous times when I missed the connection between what I knew was right and how to actually implement the strategy or technology with a particular student. For example, time trials: this is a great strategy for targeting fluency, but implementing it with a reluctant learner can be difficult if you are unsure how to proceed. This is where being a teammate, supervisor, or educational leader is critical in helping all educators move forward in effectiveness. Here's an idea to consider: follow the direct instruction process with your staff in the same way that you do with learners in the classroom.
1. I do
When helping your staff or teammates move forward, model exactly what they are to implement. I actually begin by task analyzing the strategy and providing them a copy before I demonstrate. It is one thing to tell an educator what to do (especially in a workshop), but it is at a different level when you can show them with one of their students in their classroom. In addition to demonstrating the skill correctly, it verifies in the mind of the staff member that the strategy/technology can actually be implemented with "this" child. As a consultant, some of the biggest movement I have facilitated with struggling educators has come from having them see the "stuff" I'm training them on in action in their classroom or building.
2. We do
After the educator has seen sufficient examples modeled for them (in a variety of ways), it is now their turn. Knowing the positive effect of errorless learning, it is key not to watch the educator repeatedly fail. Instead, I coach them through the implementation of the strategy/technology (using a system of least prompts — more on that in a later post). This ensures success with minimal (or no) error.
3. You do
After the educator demonstrates proficiency during the prompted practice, allow them to implement the strategy without your assistance. It is critical to know that feedback is still required here; you just deliver it after the implementation.
I know this seems like a laborious process when you break it down into these steps. Don't be afraid to leverage technology that lets you provide feedback across many classrooms without driving to each one (see one of our studies here). However, intensive work with one educator will result in years of effective work with thousands of learners.