
Determining the effectiveness of intervention practices


As was discussed in a previous post, there are several resources for evaluating which of the many practices out there are considered evidence based in the area of autism. But what about areas of intervention that are simply too new to have a significant evidence base? How are we to evaluate those practices? Should we deny new and promising treatments to our students simply because we have to wait for the scientific process to grind through its stages: a researcher identifying the practice for evaluation, designing and completing a study, analyzing the results, and then navigating a publishing process that can itself take two years? Do we refuse a request for a communication strategy using an iPad, for instance, because there simply is not enough evidence at this point to support its use? That was the specific subject of an earlier post, but I want to broaden the perspective to include a variety of other interventions that are too new to have published research supporting them, such as cognitive-based interventions for social skill competence for higher-functioning students.

There are some strategies we can employ to evaluate the impact of interventions with our students. Let me qualify that, however, by saying that if given a choice between a strategy with a clear research base and one with none, we should clearly start with the one that has evidence to support it. In addition, these strategies are not offered as a way to justify marginal or unproven interventions, or those that have been disproved (e.g., facilitated communication) or that have evidence indicating the potential for harm (e.g., holding therapy). They are strategies for assessing whether a research-based practice is effective for your individual student with autism and for evaluating interventions, practices, and tools that are too new to have a strong evidence base. They should also be employed even when you are using an evidence-based strategy, to assure that it is effective in your situation.

First, it is important to review the literature on evidence-based strategies (the resources summarized in the earlier post) to see whether the practice you are considering has been evaluated before or is similar to one that has. For instance, if you are thinking of using Proloquo2Go with a child on the iPad, look at the evidence base for voice output devices and augmentative communication systems. The use of the iPad does not make this a whole new practice; it simply puts another spin on it. Similarly, video modeling has an emerging evidence base, and delivering it on an iPad or iPhone does not change the fundamental research base for the practice. Keep in mind that the iPhone and iPad are typically tools rather than practices and should be used when they are useful and put away when they are distracting.

Second, even if a practice has a strong evidence base, that does not mean it will be effective for your individual student or your situation. Hence, it is important to approach implementation methodically, introducing one change at a time, so that you can assess whether the practice is indeed effective or whether the behavior change occurred because of some other strategy you began at the same time. Also, take baseline data prior to implementing the intervention so that you have something to compare the effects to later. Then take consistent, objective data throughout your implementation of the practice and compare it to your baseline data. If possible, develop some type of rubric or evaluation tool that a blind observer (someone who does not know what the expectation is for the intervention) can use to assess whether it is effective.
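To make that comparison concrete, here is a minimal sketch in Python; the per-session counts and the measure (independent communication attempts) are entirely hypothetical, and a paper data sheet or spreadsheet works just as well:

```python
# Hypothetical per-session counts of the target behavior
# (e.g., independent communication attempts). Baseline data
# are collected before the intervention begins.
baseline = [2, 3, 1, 2, 3]
intervention = [4, 5, 6, 5, 7]

def summarize(label, sessions):
    mean = sum(sessions) / len(sessions)
    print(f"{label}: mean = {mean:.1f}, range = {min(sessions)}-{max(sessions)}")

summarize("Baseline", baseline)
summarize("Intervention", intervention)

# Rough check: do any intervention sessions fall at or below the
# best baseline session? Heavy overlap weakens the case that the
# intervention (rather than something else) caused the change.
overlap = any(x <= max(baseline) for x in intervention)
print("Phases overlap:", overlap)
```

Graphing the same two phases on a single line chart, as single-case researchers typically do, also makes the comparison easy to share with a team or with that blind observer.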

Finally, analyze the data on a regular basis to determine whether the strategy is working and still needed. Regular review means you are not wasting time implementing a strategy that is not working when you could replace it with something that is. In addition, ask other stakeholders in the person’s life whether they feel the practice is worthwhile: are the results you are seeing worth the effort being expended? This is typically called social validity in the scientific literature, and it gives you a good sense of whether it is worth continuing the practice for this particular individual.
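Continuing the same hypothetical sketch, a periodic review can be as simple as comparing the most recent sessions to baseline and deciding whether to continue, adjust, or replace the strategy; the threshold below is an arbitrary illustration, not a clinical standard:

```python
def review(baseline, recent, min_improvement=1.5):
    # min_improvement is an arbitrary illustrative threshold,
    # not a clinical or research-derived standard.
    baseline_mean = sum(baseline) / len(baseline)
    recent_mean = sum(recent) / len(recent)
    if recent_mean >= baseline_mean * min_improvement:
        return "continue: data show meaningful improvement"
    if recent_mean > baseline_mean:
        return "modify: some improvement, but review how it is being implemented"
    return "replace: no improvement over baseline"

print(review(baseline=[2, 3, 1, 2, 3], recent=[5, 6, 5]))
```

Whatever form the review takes, pairing the numbers with the stakeholders’ judgment of social validity keeps the decision grounded in what matters to the individual.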

You can look at these strategies as being primarily good teaching, and they are. We should use them for all our interventions to determine whether what we are doing is benefiting the student. However, I see a number of educators who think that if they are using a “scientifically based intervention” they do not need to determine whether or not it works. It’s important to remember that we need to evaluate each individual’s progress based on his or her performance, and for that there is no easy answer other than data collection. I’ve provided some resources for data collection and analysis below, but will do a longer post on that complex topic another time.

Resources

