Stochastic frontier analysis (SFA) has been widely used to measure efficiency within health economics and statistics (Bogetoft & Otto 2010; Olsen & Street 2008; Rosko & Mutter 2008). This focus is driven by three major challenges for health care: rising costs per capita, new medical treatments and demographic changes resulting from an ageing population (Porter 2009). Given this development, efficiency-improving initiatives are an important focus in order to meet the future challenges of the healthcare sector.

Difference-in-differences (DiD) has become a very popular method in the health economics and health policy literature in recent years (Imbens & Wooldridge 2009). The DiD design can account for changes in either patient treatment or policy decisions over time (Angrist & Pischke 2010) and can therefore be used to analyze natural experiments in any given country (Imbens & Wooldridge 2009).

In this paper I present a methodological design for conducting stochastic frontier analysis within a difference-in-differences framework, utilizing data from a trial evaluation of a natural experiment in Danish healthcare. The analysis is coded in R.

Methods

SFA production efficiency

The SFA model for the production function in our study is written in the following way (Bogetoft & Otto 2010):
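A standard formulation of this model, consistent with Bogetoft & Otto (2010), is:

```latex
\ln y_i = \beta_0 + \sum_{j} \beta_j \ln x_{ij} + v_i - u_i, \qquad i = 1,\dots,n
```

where y_i is the output of department i, x_ij are its inputs, v_i is the noise term and u_i ≥ 0 is the inefficiency term, with composed error ε_i = v_i − u_i.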

In the equation, the SFA production function consists of the production function for each hospital department plus a department-specific composed error. This stochastic component has two parts: statistical noise and measurement error, v, and inefficiency, u. An important assumption of SFA is that v and u have different distributions. Under this assumption, it is possible to estimate the model by (corrected) OLS. Another way to estimate the SFA model is by ML, given further distributional assumptions for v and u (Bogetoft & Otto 2010). In the ML estimator it is assumed that v follows a normal distribution and that u follows a half-normal distribution; that is, the probability density function of each u is a truncated version of a normal random variable with a mean value of 0 and a variance of σᵤ² (Bogetoft & Otto 2010).
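A minimal sketch of this ML estimation (in Python rather than the paper's R; all data and parameter values below are simulated purely for illustration), maximizing the standard normal/half-normal log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate a half-normal SFA data set (all parameter values are made up).
n = 500
x = rng.uniform(1.0, 2.0, n)          # log input
v = rng.normal(0.0, 0.1, n)          # normal noise
u = np.abs(rng.normal(0.0, 0.2, n))  # half-normal inefficiency
y = 1.0 + 0.8 * x + v - u            # log output, true slope 0.8

def negloglik(theta):
    """Negative normal/half-normal SFA log-likelihood in (b0, b1, log sigma, log lambda)."""
    b0, b1, log_sigma, log_lam = theta
    sigma, lam = np.exp(log_sigma), np.exp(log_lam)
    eps = y - b0 - b1 * x
    ll = (n / 2 * np.log(2 / np.pi) - n * np.log(sigma)
          + norm.logcdf(-eps * lam / sigma).sum()
          - (eps ** 2).sum() / (2 * sigma ** 2))
    return -ll

res = minimize(negloglik, x0=np.array([0.0, 1.0, np.log(0.2), np.log(1.0)]),
               method="Nelder-Mead")
print(res.x[:2])  # intercept and slope estimates
```

Log-transforming σ and λ keeps both parameters positive during the unconstrained optimization.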

Applying an SFA production function, the log-likelihood function is given by (Bogetoft & Otto 2010):
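With composed error ε_i, the parametrization σ² = σᵥ² + σᵤ² and λ = σᵤ/σᵥ, the standard normal/half-normal log-likelihood takes the form:

```latex
\ln L(\beta,\sigma,\lambda) = \frac{n}{2}\ln\frac{2}{\pi} - n\ln\sigma
+ \sum_{i=1}^{n} \ln \Phi\!\left(-\frac{\epsilon_i\lambda}{\sigma}\right)
- \frac{1}{2\sigma^2}\sum_{i=1}^{n}\epsilon_i^2
```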

Maximizing the log-likelihood yields consistent ML estimators of the parameters β, σ and λ (Bogetoft & Otto 2010). The expected value of the inefficiency is given by:
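The standard expression for this conditional expectation in the normal/half-normal model is:

```latex
E[u_i \mid \epsilon_i] = \sigma_* \left[ \frac{\phi(z_i)}{1 - \Phi(z_i)} - z_i \right],
\qquad z_i = \frac{\epsilon_i \lambda}{\sigma}, \qquad \sigma_* = \frac{\sigma\lambda}{1+\lambda^2}
```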

To arrive at an estimate of the hospital department-specific technical efficiency, the estimated inefficiency can be substituted into the efficiency function TEᵢ = exp(−ûᵢ).

Difference-in-differences design with SFA & bootstrapping

In this section the difference-in-differences design for the SFA model is explained. The DiD design is structured so that it fits the data structure of the SFA analysis. It is then relevant to validate the result using bootstrapping methods, in which the estimates are tested for statistically significant effects.

In the first part of the analysis, the SFA estimates are calculated using the method described in the section above. In order to find the average treatment effect before and after the intervention, the estimates are divided into a before category and an after category, and then into a treatment and a control category. This creates four different categories of SFA estimates, depicted in Table I:

Table I — Difference-in-differences design with SFA

              Pre-intervention   Post-intervention
Control             A                   B
Treatment           C                   D

As the table shows, the first difference estimate is calculated as the post-intervention minus the pre-intervention estimate of the treatment group (D − C). The second difference estimate is calculated as the post-intervention minus the pre-intervention estimate of the control group (B − A). Subtracting the two difference estimates forms the DiD design, giving the DiD formula: (D − C) − (B − A).
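As an illustrative sketch of this calculation (in Python rather than the paper's R, with made-up efficiency scores; A/B denote the control group before/after and C/D the treatment group before/after, matching the DiD formula above):

```python
import numpy as np

# Hypothetical SFA efficiency scores for the four cells of Table I.
A = np.array([0.80, 0.82, 0.78])  # control, pre-intervention
B = np.array([0.81, 0.83, 0.79])  # control, post-intervention
C = np.array([0.75, 0.77, 0.76])  # treatment, pre-intervention
D = np.array([0.72, 0.74, 0.73])  # treatment, post-intervention

# DiD = (D - C) - (B - A): treatment change minus control change.
did = (D.mean() - C.mean()) - (B.mean() - A.mean())
print(round(did, 3))
```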

This scenario can be described with algebra in the following way. First, consider the above described SFA model:
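A standard way to write the model with group and period indicators (notation assumed here: T = 1 for the treatment group, P = 1 for the post-intervention period) is:

```latex
y_{it} = \beta_0 + \beta_1 T_i + \beta_2 P_t + \beta_3 (T_i \cdot P_t) + v_{it} - u_{it}
```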

It is possible to rearrange the above expression and extract the difference-in-differences equation for SFA:
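Assuming the expected composed error is constant across groups and periods, so that it cancels in differences, taking expectations for each group gives the two single differences:

```latex
\begin{aligned}
E[y \mid T=1, P=1] - E[y \mid T=1, P=0] &= \beta_2 + \beta_3 \\
E[y \mid T=0, P=1] - E[y \mid T=0, P=0] &= \beta_2
\end{aligned}
```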

This gives us the difference-in-differences estimates, by subtracting the above calculations:
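Subtracting the control-group difference from the treatment-group difference leaves the interaction coefficient:

```latex
\text{DiD} = (\beta_2 + \beta_3) - \beta_2 = \beta_3
```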

The regression coefficient above is the derived difference-in-differences estimator, which captures the average treatment effect of the analysis.

After this calculation, it is relevant to test whether the estimated effect remains statistically significant under random resampling. To perform this test, a bootstrapping method is applied; this method can calculate confidence intervals for an effect measure such as SFA efficiency (Briggs et al. 1997). In order to apply the bootstrap to SFA, consider the SFA estimate:
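Writing the department-level efficiency estimates as the sample to be resampled (notation assumed here):

```latex
\hat{\theta} = \left( \widehat{TE}_1, \dots, \widehat{TE}_n \right)
```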

Due to the central limit theorem, the sample mean of the SFA estimates will be asymptotically normally distributed (Briggs et al. 1997). It is therefore possible to draw random samples with replacement from the SFA estimates in order to obtain a confidence interval for the estimate. In this study, 1000 random samples with replacement were drawn. From this newly generated data, a standard deviation of the sample is calculated:
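With B = 1000 bootstrap repetitions and θ̄*_b denoting the mean of the b-th resample, the bootstrap standard deviation is:

```latex
s_{\mathrm{boot}} = \sqrt{ \frac{1}{B-1} \sum_{b=1}^{B} \left( \bar{\theta}^{*}_{b} - \bar{\theta}^{*} \right)^{2} },
\qquad \bar{\theta}^{*} = \frac{1}{B} \sum_{b=1}^{B} \bar{\theta}^{*}_{b}
```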

The logic behind using this calculated standard deviation for a confidence interval is that the measure of precision is derived from the statistic's sampling distribution. Because the SFA estimate is asymptotically normally distributed, it is reasonable to assume that the observed bootstrap distribution is a good estimate of the underlying population distribution. Because the number of observations in the original data on which the SFA was calculated is high, and because the number of bootstrap repetitions is high, the accuracy of the bootstrap method is also high (Briggs et al. 1997).
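A minimal sketch of this bootstrap procedure (in Python rather than the paper's R; the department-level estimates are simulated here, since the original data are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical department-level estimates (values made up for illustration).
estimates = rng.normal(loc=-0.01, scale=0.02, size=200)

B = 1000  # number of bootstrap repetitions, as in the paper
boot_means = np.array([
    rng.choice(estimates, size=estimates.size, replace=True).mean()
    for _ in range(B)
])

se = boot_means.std(ddof=1)                               # bootstrap standard deviation
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])  # percentile 95% CI
print(se, ci_low, ci_high)
```

A normal-approximation interval (mean ± 1.96·se) could be used instead of the percentile interval, given the asymptotic normality argument above.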

Methodological comments

The difference-in-differences method has traditionally been combined with regression models that describe how average performance depends on a series of covariates. The DiD estimate in such applications allows us to estimate how the patient is affected by an organizational change. By using a frontier approach like stochastic frontier analysis, we gain insight into how an organizational change affects best practice as well as the spread in performance. It is possible, for example, that a merger improves average practice without improving best practice, and it is possible that an organizational change has no impact on best-practice outcomes but reduces the spread in performance. Changes in best practice correspond, in the SFA case, to a change in the functional form, while a reduction in the performance spread corresponds to increasing efficiency scores.

Context & data

To investigate the research question we focus on a reform of the Danish healthcare sector implemented in October 2011. The reasons for this reform were a higher demand for healthcare combined with fewer resources to meet it. It was therefore necessary to implement cost-cutting solutions in order to keep the budget while maintaining performance. One of the tools applied to reduce costs was the merging of hospital departments, among them the urological department. As a result of the merging decision, all urological activity was moved from a regional hospital department to a university hospital department in order to obtain economies of scale.

We use the departments affected by this merging decision as the treatment units in our difference-in-differences analysis. As control units, we use two regional hospital departments from the Central Denmark Region, where no merging decision was made during the period.

We use a dataset based upon variables from Danish clinical and health economic databases, collected for the period 2010–2012. We calculate SFA efficiency using length of stay as the efficiency measure and average DRG costs as the independent variable. Furthermore, we apply sex, age category and readmissions as covariate controls.

Results

The table below shows the results of our stochastic frontier analysis:

The next table shows the results of the SFA efficiency analysis at group level:

As the result above shows, there is some variation in the mean of the efficiency measure across the four groups. As the last measure shows, the aggregate DiD estimate drops by 1% from before to after the merging intervention. In order to test the sensitivity of this result, the table below shows the result of a bootstrap simulation of the aggregate DiD LOS measure, with 1000 repetitions:

As the bootstrap model above shows, the 95% confidence interval indicates a negative efficiency effect when the aggregate DiD result is simulated with 1000 repetitions. This sensitivity result therefore indicates a negative effect of the reform.

Conclusion

The purpose and contributions of this study were twofold. First, the study introduces a new methodological approach by applying stochastic frontier analysis within a difference-in-differences design. This design makes it possible for future researchers to investigate efficiency changes and policy implications in any given program evaluation.

Second, we utilized this new methodological approach to investigate the policy implications of, and changes in efficiency from, the merger of the urological speciality in Danish healthcare. We found a slightly negative change in efficiency due to the merging decision.

The combination of SFA with a DiD design is useful for further research into the efficiency of a policy intervention, before and after the program is introduced.

References

Angrist, J.D. & Pischke, J.-S., 2010. The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics. Journal of Economic Perspectives, 24(2), pp.3–30. Available at: http://pubs.aeaweb.org/doi/abs/10.1257/jep.24.2.3.

Bogetoft, P. & Otto, L., 2010. Benchmarking with DEA, SFA, and R, Springer Science & Business Media.

Briggs, A.H., Wonderling, D.E. & Mooney, C.Z., 1997. Pulling cost-effectiveness analysis up by its bootstraps: A non-parametric approach to confidence interval estimation. Health economics, 6(4), pp.327–340.

Imbens, G.W. & Wooldridge, J.M., 2009. Recent Developments in the Econometrics of Program Evaluation. Journal of Economic Literature, 47(1), pp.5–86.

Olsen, K.R. & Street, A., 2008. The analysis of efficiency among a small number of organisations: How inferences can be improved by exploiting patient-level data. Health Economics, 17(6), pp.671–681.

Porter, M.E., 2009. A Strategy for Health Care Reform — Toward a Value-Based System. The New England Journal of Medicine, pp.109–112.

Rosko, M.D. & Mutter, R.L., 2008. Stochastic frontier analysis of hospital inefficiency: a review of empirical issues and an assessment of robustness. Medical Care Research and Review, 65(2), pp.131–166.
