Understanding What Statistics Say, Or Don’t Say, About Your Practice




Joette Derricks, CPC, CHC, CMPE, CSSGB, Vice President of Regulatory Affairs & Research, ABC

“Do not put your faith in what statistics say until you have carefully considered what they do not say.” (William W. Watt)

If seven out of ten people “strongly disagree” with a statement, do you have a trend? What if seven hundred out of a thousand “strongly disagree”? We often look at numbers or percentages and conclude that a trend exists. Yet are you sure the results are meaningful? The rule of thumb is that the larger the population, the more confident you can be in drawing conclusions. For example, if you look at how your practice compares to the average and you score very high or very low, the obvious conclusion is that you are doing really well or really poorly in that area. However, if the number does not make sense to you, perhaps there was a problem with the data. If the population was too small, or skewed in some other way, you may be drawing poor conclusions. First verify and understand the data; then dig down into your processes to see what you are or are not doing right.
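To make the sample-size point concrete, here is a minimal Python sketch. The 95 percent confidence level and the normal-approximation (Wald) formula are standard statistics, not something taken from any survey report; the function name is invented for illustration.

```python
import math

def margin(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% margin of error for a proportion."""
    p = successes / n
    return z * math.sqrt(p * (1 - p) / n)

# Seven of ten vs. seven hundred of a thousand: same 70%, very different certainty.
print(f"7/10:     70% +/- {margin(7, 10):.1%}")      # roughly +/- 28 points
print(f"700/1000: 70% +/- {margin(700, 1000):.1%}")  # roughly +/- 3 points
```

The same 70 percent result carries a margin of error roughly ten times smaller when the sample is a hundred times larger, which is why the second result supports a conclusion and the first barely does.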

There is a lot of healthcare information available these days. We have key performance indicators such as net collection rate and days in A/R. We have quality data and compensation statistics based on work RVUs. Every day we look at different numbers and draw conclusions about the performance of our practices. Yet do we really understand the numbers?

The Medical Group Management Association (MGMA) is one of the premier associations providing performance and benchmarking data. It produces a number of survey reports with overlapping data elements that, surprisingly, may each paint a different picture. Two of MGMA's popular reports are the Physician Compensation and Production Survey Report and the Cost Survey Report. Both contain good information. Yet if you try to compare the same data element across the two reports, you may be surprised at what you find.

Let’s take a look at a popular data element: physician work RVUs. Say a physician has 9,300 work RVUs. How does he compare to other physicians within his specialty?

According to the MGMA Cost Survey, a physician with 9,300 work RVUs is at the 25th percentile.

However, the same number of work RVUs would place him between the median and the 75th percentile based on the MGMA Physician Compensation and Production Survey.

Why the difference? Well, remember the opening comments regarding population size: over three times as many physicians responded to the MGMA Physician Compensation and Production Survey. If you are negotiating with a payer, or negotiating a new contract based on work RVUs, your compensation can be significantly affected if one set of numbers is used instead of the other. Are there other factors that might contribute to the variance?

Sure. The compensation report is based on 2010 data and the cost survey on 2009 data, which may account for differences in how practices charged or captured RVUs. Another factor is that the Cost Survey includes data from all practitioners (e.g., nurse practitioners and CRNAs), whereas the compensation survey is based solely on physician data. For collections, work RVUs, and compensation data, the gold standard is the MGMA Physician Compensation and Production Survey.

So far we have been comparing data elements from two different survey reports; now let’s focus on tables within just the MGMA Physician Compensation and Production Survey, and specifically on compensation data. Within the report there is no relationship between data elements in different tables. In other words, the percentiles from different tables do not match up. For example, there is no relation between the 75th percentile for compensation and the 75th percentile for work RVUs within the same specialty. Therefore, dividing one by the other will not give you the 75th percentile compensation per work RVU! (See Table 1.)
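This percentile pitfall is easy to demonstrate with synthetic numbers. The sketch below uses invented compensation and work RVU distributions, not figures from any MGMA report, and compares the ratio of two 75th percentiles against the 75th percentile of the per-physician ratio:

```python
import random
import statistics

random.seed(1)

# Hypothetical survey: each physician reports total compensation and total
# work RVUs. Both distributions are invented purely for illustration.
comp  = [random.gauss(300_000, 60_000) for _ in range(500)]
wrvus = [random.gauss(7_000, 1_500) for _ in range(500)]

def p75(values):
    """75th percentile (upper quartile) of a list of values."""
    return statistics.quantiles(values, n=4)[-1]

ratio_of_p75 = p75(comp) / p75(wrvus)                     # tempting shortcut
p75_of_ratio = p75([c / w for c, w in zip(comp, wrvus)])  # per-physician ratio

print(f"75th pct comp / 75th pct wRVU: {ratio_of_p75:.2f}")
print(f"75th pct of comp per wRVU:     {p75_of_ratio:.2f}")
```

The two figures come out noticeably different even though they are computed from the same physicians, which is why compensation per wRVU must be read from its own percentile table rather than derived by dividing two other tables.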

Conclusion

Before you take action based on statistical data and benchmarking, be sure you understand what the data says, or doesn’t say. Otherwise, you might find you have drawn some inaccurate conclusions that may adversely impact your practice.

Joette Derricks, CPC, CHC, CMPE, CSSGB, serves as Vice President of Regulatory Affairs and Research for ABC. She has 30+ years of healthcare financial management and business experience. Knowledgeable in third-party reimbursement, coding, and compliance issues, Ms. Derricks works to ensure client operations are both productive and profitable. She is a long-standing member of MGMA, HCCA, AAPC, and other associations. She is also a sought-after, nationally acclaimed speaker, having presented at AHIMA, Ingenix, MGMA, and HCCA national conferences. You can reach her at Joette.Derricks@AnesthesiaLLC.com.