For those of you hanging in there with me on this blog, thank you. I know that outcomes are not always the most riveting subject (though I find it quite interesting). If you are thinking, “She is a bit of a nerd,” relax! This would not be the first time in my life that someone has called me a “nerd”. Now that I have grown up a bit, I have finally embraced that facet of my personality. Those who know me know that I try to hide it, but sometimes it just spills out. In this blog about data, it is on full display.
I want to talk about two things that are important to making the results of your data analysis meaningful. After you have gathered your resources, found a champion, selected those psychometrically normed measures, collected the data, and found someone to manage it, then you are ready to realize the fruits of your labor. You have heard me say in almost all of my blog posts that technology is your friend. In the analysis of data and the reporting of results, I would move technology into the codependent marriage category. It is not just your friend; you really have to have it.
The first thing that an adequate technology platform can do for you is determine the statistical power of your data analysis. Statistical power, as defined in most textbooks, is the probability of detecting an effect when one is present (or, in textbook language, the probability of correctly rejecting the null hypothesis). With all the work that we put into outcomes research, we want to make sure we see the impact of our treatment interventions if and when it is present. Some effects are large and some are small, but if they meet the level of significance designated by our analysis models, they are meaningful. Detecting an effect is important because, once we know it is there, we can adjust our interventions to extinguish negative outcomes or enhance positive ones.
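For my fellow nerds who want to see what that looks like in practice, here is a minimal sketch of a power check in Python using the statsmodels library. To be clear, the effect size and group size below are hypothetical placeholders, not actual BRC numbers; an adequate technology platform runs this kind of calculation for you behind the scenes.

```python
# A minimal sketch of a statistical power check, assuming a two-group
# comparison (e.g., a treatment group vs. a comparison group).
# The effect size and sample size are hypothetical, not real BRC data.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(
    effect_size=0.5,  # Cohen's d; a "medium" effect by convention
    nobs1=40,         # clients in the first group (hypothetical)
    alpha=0.05,       # significance level
    ratio=1.0,        # equal group sizes
)
print(f"Chance of detecting a d = 0.5 effect with 40 per group: {power:.2f}")
```

With these illustrative numbers, the power comes out to roughly 0.60, meaning there is only about a 60% chance of detecting the effect even if it is really there; well short of what most researchers would want.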
One of the easiest ways to make sure our analyses have adequate power is to make sure our sample size is large enough. Collecting enough data takes time. Just because one individual with stimulant use disorder stays sober after undergoing treatment at BRC does not mean that our treatment interventions are effective for people who use methamphetamine. Rather, we have to treat many individuals who use methamphetamine, have them complete our normed measures, and analyze the data to see if our interventions are effective. Effectiveness can then be determined when the statistical analysis shows a treatment effect AND the measure of statistical power meets the proper threshold (conventionally, 80%).
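If you are wondering what “large enough” actually means, the same library can turn the question around and solve for the sample size. Again, this is a sketch under stated assumptions: the effect size is a hypothetical stand-in for the smallest effect we would care about, and 80% power is the conventional target, not a BRC-specific rule.

```python
# A minimal sketch of a sample-size calculation for a two-group
# comparison, targeting the conventional 80% power threshold.
import math

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # smallest effect worth detecting (hypothetical)
    alpha=0.05,       # significance level
    power=0.80,       # conventional power target
)
print(f"Clients needed per group: {math.ceil(n_per_group)}")
```

Under these assumptions, the answer is about 64 clients per group, which is exactly why collecting enough data takes time.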
The second thing that technology can help with is benchmarking. Statistical power can help us compare effects within our own constellation of clients, but benchmarking can help us compare outside of our facilities. This is where things get a little complicated sometimes. Often there is competition between treatment providers, and truthfully, we are not always willing to work together and learn from each other for fear of losing patient revenue. I could say something to the effect of “Get over it!” but that is neither helpful nor realistic. The politics of business are real, and many of us are managing client outcomes, payrolls, and profitability. In a free market, competition should keep us all working to be better.
You may be asking, “How does technology help with the benchmarking of data when you are up against the politics of business?” Here is the answer. A few quality independent companies working in the data analytics space can provide benchmarking opportunities for your organization. These companies can take your data and compare it to the de-identified data of other facilities in such a way that you do not have to step on anyone’s toes.
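To make that concrete, here is a minimal sketch of the simplest version of a benchmarking comparison, once an independent analytics company hands back a de-identified comparison set. The variable names and scores are hypothetical, and real benchmarking platforms use far more sophisticated, risk-adjusted models; the underlying idea of comparing your outcomes against a de-identified pool is the same.

```python
# A minimal sketch of a benchmarking comparison, assuming outcome-measure
# scores arrive as plain lists. All names and values are hypothetical.
from scipy import stats

our_facility_scores = [62, 58, 71, 66, 59, 64, 70, 61]       # our clients (hypothetical)
benchmark_scores = [55, 60, 52, 58, 63, 49, 57, 61, 54, 56]  # de-identified pool (hypothetical)

t_stat, p_value = stats.ttest_ind(our_facility_scores, benchmark_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests our outcomes differ from the benchmark pool;
# whether that difference is good news depends on the measure's direction.
```

No facility names, no patient identifiers, no toes stepped on; just your numbers against everyone else’s in aggregate.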
Benchmarking is critical, yet it is not a widely accepted practice in our industry. That must change if the industry is to adapt and improve the ways in which we treat our clients. The days of one-size-fits-all treatment are over. The client mix is too acute and too complex not to take a hard look at what is and is not working in your facility. The best ways to do this are to look at within-treatment comparisons (with appropriate statistical power) and between-treatment comparisons (with de-identified benchmarking sets).