Jesdeep Bassi, Francis Lau, Measuring value for money
Instruction
Study Group #3 Discussion
Learning Objectives: Students will lead discussions and provide insight on journal articles and/or HIMSS webinars or videos on current and relevant topics in the industry.
Assignments:
Read: Jesdeep Bassi, Francis Lau, Measuring value for money: a scoping review on economic evaluation of health information systems, Journal of the American Medical Informatics Association, Volume 20, Issue 4, July 2013, Pages 792–801, https://doi.org/10.1136/amiajnl-2012-001422 (Measuring value for money a scoping review on economic evaluation of health information systems.pdf)
Watch: HIMSSCAST: What the future holds for telehealth reimbursement, https://himsstv.brightcovegallery.com/detail/video/6270362042001/himsscast:-what-the-future-holds-for-telehealth-reimbursement?autoStart=true&q=financing
Complete and Upload into Discussion Forum: 2-page executive summary, group discussion question, and list of website resources.
Measuring value for money: a scoping review on economic evaluation of health information systems
Jesdeep Bassi, Francis Lau
Additional material is published online only. To view please visit the journal online (http://dx.doi.org/10.1136/amiajnl-2012-001422).
School of Health Information Science, University of Victoria, Victoria, British Columbia, Canada
Correspondence to Jesdeep Bassi, School of Health Information Science, University of Victoria, PO Box 3050 STN CSC, Victoria, BC, Canada V8W 3P5; jbassi@uvic.ca
Received 16 October 2012; Revised 21 January 2013; Accepted 26 January 2013; Published Online First 15 February 2013
To cite: Bassi J, Lau F. J Am Med Inform Assoc 2013;20:792–801.
ABSTRACT
Objective To explore how key components of economic evaluations have been included in evaluations of health information systems (HIS), to determine the state of knowledge on value for money for HIS, and provide guidance for future evaluations.
Materials and methods We searched databases, previously collected papers, and references for relevant papers published from January 2000 to June 2012. For selection, papers had to: be a primary study; involve a computerized system for health information processing, decision support, or management reporting; and include an economic evaluation. Data on study design and economic evaluation methods were extracted and analyzed.
Results Forty-two papers were selected and 33 were deemed high quality (scores ≥8/10) for further analysis. These included 12 economic analyses, five input cost analyses, and 16 cost-related outcome analyses. For HIS types, there were seven primary care electronic medical records, six computerized provider order entry systems, five medication management systems, five immunization information systems, four institutional information systems, three disease management systems, two clinical documentation systems, and one health information exchange network. In terms of value for money, 23 papers reported positive findings, eight were inconclusive, and two were negative.
Conclusions We found a wide range of economic evaluation papers that were based on different assumptions, methods, and metrics. There is some evidence of value for money in selected healthcare organizations and HIS types. However, caution is needed when generalizing these findings. Better reporting of economic evaluation studies is needed to compare findings and build on the existing evidence base we identified.
INTRODUCTION
Increasingly, health information systems (HIS) are being adopted across healthcare settings. However, they require significant upfront and ongoing investments. With many healthcare organizations experiencing financial pressures, justification for HIS adoption is becoming a necessity.1 This presents two challenges. First, the system has to provide demonstrated value to the organization, but the concept of ‘value’ is somewhat elusive when it comes to HIS. A recent report by researchers on primary healthcare electronic medical records (EMR) found that the chief gap in knowledge and research in Canada pertains to the value of EMR.2 The second challenge is that value needs to be considered in relation to investment in the system to determine if it is worth the cost.
The 2010 overview of federal and provincial audit reports on electronic health records (EHR) in Canada states the need for information to determine the value for investments made so far.3 In looking specifically at information technology (IT)-enabled diabetes management, Adler-Milstein et al4 cited a lack of published literature on costs and benefits. Further, a systematic review on health IT by Chaudhry et al5 found limited data on costs and little information available for stakeholders to judge the financial effects of adoption. Goldzweig et al6 echoed similar findings in their review. They attributed this in part to the difficulty of conducting the analyses.
Given this gap in evidence for evaluating the costs and value of HIS, we conducted a scoping review to identify and examine studies that have included an economic aspect to HIS evaluation. A scoping review follows a similar methodology to a systematic review but differs in that it seeks to determine what literature exists on a topic and to identify gaps rather than synthesizing the evidence to answer a specific clinical question. This review offers three contributions by addressing the following questions:
1. What are the key components of an economic analysis and how have they been included and reported in past HIS economic evaluation studies?
2. What is the current state of knowledge on value for money in HIS economic evaluation studies?
3. What guidance for conducting economic evaluations can be provided from high quality HIS economic evaluation studies identified through this review?
METHODS
Search strategy
We searched English language papers indexed in MEDLINE and Business Source Premier for relevant papers published between January 2000 and June 2012. The search strategy used a combination of text words in the title and abstract, Medical Subject Headings (MeSH), and subheadings/qualifiers. A broad set of search terms was used to maximize sensitivity. (The complete queries are available in online supplementary appendix 1.) We also hand-searched previously collected papers kept by the review team and performed reference mining for additional papers.
Selection
Three inclusion criteria were used for selection: the paper had to (1) be a primary study; (2) involve a computerized system for health information management, decision support, or management reporting; and (3) include an economic evaluation. Consistent with our research mandate to focus on HIS for providers, we excluded systems that were: (1) telemedicine/telehealth applications; (2) digital devices and specialized systems (eg, imaging); (3) used by patients; and (4) used for education. For studies that were described by more than one publication with the same data, only the most recent publication was included. One reviewer screened all titles and abstracts of references captured by the search strategy. The final selection of studies was carried out through review of full text and consensus.
Quality assessment
The methodological quality of included papers considered both the overall study quality and items relevant to economic evaluations.
We developed 10 quality criteria based on a comparison of quality checklist items from four sources on economic research.7–10 The quality assessment criteria and process details are available in online supplementary appendix 2. At the end of the process, a single overall quality score from 0 to 10 was assigned to each paper. We identified higher quality papers as those with scores between 8 and 10.
Data extraction
The general characteristics of each paper, including publication year, country, setting, sample, and data sources, were extracted. We also created a categorization of HIS types based on two sources5 11 (see online supplementary appendix 3). Systems were categorized based on descriptions provided in the papers and the categories were modified as needed. For systems assigned to multiple categories, we analyzed the paper under a main category based on the key functionalities being examined.
All papers considered had to contain some form of an economic evaluation. We also included three refined classifications based on the economic literature: economic analysis, input cost analysis, and cost-related outcome analysis. An economic analysis includes a comparison between costs and outcomes and can be a cost-minimization analysis, cost-consequence analysis (CCA), cost-effectiveness analysis, cost-utility analysis, or cost-benefit analysis. Six key components should be present in an economic analysis: perspective (eg, societal, organizational, or individual), a specified time frame, at least two alternative options for comparison, costs, outcomes, and an analysis which compares costs and outcomes for each option. Input cost analyses and cost-related outcome analyses are one-sided evaluations where only costs or cost-related outcomes are evaluated. Figure 1 provides a summary of the economic evaluation classifications. More detailed descriptions are available in online supplementary appendix 4.
Synthesis
After classifying the papers into the type of economic evaluation, our synthesis focused on how each evaluation component has been described in the papers and on summarizing findings by HIS type. To ensure the quality of our findings, only those papers with quality scores of 8–10 were included in the synthesis. For these papers, we further investigated: cost and outcome metrics included, analytical methods used, and ways to summarize study results.
Cost and outcome metrics
In this review we only extracted tangible cost and outcome metrics from high quality papers. For papers that performed an economic analysis, we extracted all outcomes because they were analyzed in comparison to costs. For papers that reported only outcomes or outcomes separate from input costs, we extracted only those that were associated with a monetary value, for example, dollar savings. Arlotto and Oakes12 created three major cost categories: direct costs (one time), direct costs (ongoing), and indirect costs (ongoing). One time direct costs are associated with acquiring and implementing the system, whereas ongoing direct costs reflect costs associated with ongoing operation of the system. Ongoing indirect costs are other costs incurred in supporting operation, such as security and policy management. We used Arlotto and Oakes’ items as the basis of our mapping and organization of cost and outcome metrics, modifying the original list where needed. This extraction and mapping identified which cost and outcome metrics have been incorporated the most into economic evaluations of HIS and which items have been included least.
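For readers less familiar with these analysis types, the summary measures they typically report can be written compactly as follows. This is a generic textbook formulation for orientation only; the notation is ours and is not drawn from the reviewed papers.

```latex
% Standard summary measures for the economic analysis types described above
% (generic definitions; subscript 1 = option with the HIS, 0 = comparator).
\begin{align*}
\text{Net benefit (CBA)} &= \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^{t}}
  && \text{monetized benefits minus costs, discounted at rate } r \\
\text{ICER (CEA)}        &= \frac{C_1 - C_0}{E_1 - E_0}
  && \text{incremental cost per extra unit of effect} \\
\text{CUA ratio}         &= \frac{C_1 - C_0}{\mathrm{QALY}_1 - \mathrm{QALY}_0}
  && \text{incremental cost per quality-adjusted life year gained} \\
\text{CMA decision}      &= \arg\min_{i \in \{0,1\}} C_i
  && \text{least-cost option, outcomes assumed equivalent}
\end{align*}
```

A cost-consequence analysis (CCA) has no single summary ratio; it tabulates costs and each consequence side by side and leaves the trade-off to the decision maker.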
Figure 1 Economic evaluation classification. All economic analyses are conducted over a time frame, which is optional for input cost analysis and cost-related outcome analysis. ‘Option’ indicates that comparison options may not be present in the analysis. Source: eHealth Observatory (http://ehealth.uvic.ca/methodology/models/EEFramework.php). CBA, cost-benefit analysis; CCA, cost-consequence analysis; CEA, cost-effectiveness analysis; CMA, cost-minimization analysis; CUA, cost-utility analysis.
Analytical methods
To better understand how the economic analysis was carried out, we examined each paper to identify the analytical methods used, including their sources of cost and outcome data. The analysis could be based on accounting, statistical, or econometric methods using historical records, subjective estimates, or mathematical projections.8 13 If more detailed methods were mentioned in the papers, we included the names of the analytical techniques used, such as break-even point analysis and Monte Carlo simulation modeling. To deal with data and methodological uncertainties, papers could include further analysis, such as one-way sensitivity analysis.8
Summarizing study results
We applied a simple vote counting method to summarize the cumulative effect of the study results reported in the individual papers. This method is similar to ones used in earlier HIS systematic reviews by Garg et al,14 Balas et al,15 and Lau et al.16 First we reviewed each paper to determine if it had a positive, negative, or inconclusive effect over the time period studied. We relied on the authors’ original conclusions and did not attempt to re-interpret them in our review. Next, we summarized the effects by the main HIS type listed in the original papers. We speculated that practitioners responsible for HIS economic evaluation would often focus on a specific type of information system, such as computerized provider order entry (CPOE). Through reporting the summarized effects by HIS type, we could comment on whether a given HIS type has led to any overall economic value across a similar set of studies.
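To make the tallying step concrete, the vote count by HIS type can be sketched in a few lines of code. This is a minimal illustration with hypothetical records rather than the actual review data; the paper labels, system types, and effects shown are placeholders.

```python
from collections import Counter, defaultdict

# Hypothetical (paper, HIS type, author-reported effect) records; effects are
# taken at face value from each paper's own conclusion, as described above.
papers = [
    ("Paper A", "CPOE", "positive"),
    ("Paper B", "CPOE", "inconclusive"),
    ("Paper C", "Primary care EMR", "positive"),
    ("Paper D", "Primary care EMR", "negative"),
]

votes = defaultdict(Counter)
for _, his_type, effect in papers:
    votes[his_type][effect] += 1  # one vote per paper

for his_type, tally in votes.items():
    total = sum(tally.values())
    print(f"{his_type}: {tally['positive']}/{total} positive, "
          f"{tally['inconclusive']} inconclusive, {tally['negative']} negative")
```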
RESULTS
Synopsis of selected papers
The search yielded a total of 5348 potential papers for consideration, of which 42 were selected for the review (figure 2).
Figure 2 Paper selection flow.
Online supplementary appendix 5 shows the general characteristics of the 42 papers.4 17–57 Twelve papers focused on costs and impacts, that is, costs and benefits or cost-effectiveness.22 23 30 32 33 38 39 41 42 47 51 55 Twelve papers were on system costs,4 17–19 26 27 34 37 40 45 53 56 while 12 others examined the impact of systems on costs.20 24 25 29 31 35 36 44 48 50 52 57 Six papers looked at cost savings.21 28 45 46 49 54 In terms of setting, the majority were limited to a single practice or hospital. Seven were at an organization, such as a community health center, facility, or health department.22 27 39 43 44 45 48 One paper21 looked at an entire diabetic population in the USA. Five papers included multiple settings.30 34–36 51
Most papers had multiple data sources. The top source was some form of internal organizational data or records. Sometimes this had to be assumed when the source was not specified. Other data sources were databases, and published literature, studies, or reports. For example, Furukawa et al29 extracted their EMR implementation data from the HIMSS Analytics Database, and Mekhjian et al37 obtained pre- and post-implementation length of stay and cost data from a centralized information warehouse. Six papers mentioned expert opinion19 22 30 31 51 52 and eight used data generated from the system.25 34 45 47 48 52 54 57
The majority of papers scored >5/10 on quality assessment. Eleven papers achieved perfect overall scores.24 25 33–36 41 43 47 52 53 A summary of quality assessment results is available in online supplementary appendix 2. As only papers with a quality score of 8/10 or higher were included in the subsequent analysis, the remainder of the results reported here only include these 33 papers. See online supplementary appendix 6 for an alphabetical reference list of these papers.
Summary of evaluation components
Using a checklist approach (see online supplementary appendix 7), we determined that 12 papers22 32–34 38 40–42 47 51 52 55 included all six components and therefore categorized them as economic analyses. Of these, six were cost-benefit analyses,22 32 38 40 51 52 two were cost-effectiveness analyses,33 55 two were cost-consequence analyses,42 47 one was a cost-minimization analysis,34 and one was a cost-utility analysis.41 Sixteen papers addressed cost-related outcomes18 20 21 24 25 29 35–37 44–46 48 53 54 57 and five looked solely at input costs.4 17 26 27 43
Table 1 shows a summary of the first three components and how they were reported in the papers. Almost all economic evaluations in the papers were conducted from the organizational perspective. Where possible, we determined the time frame based on data collection but often had to rely on overall study time frames reported. Eighteen papers reported time frames of between 1 and 5 years.4 17 24–27 33–36 40 43 46–48 52 53 57 The clearest comparisons among alternatives or options were with-without system or pre-post intervention. These were used in 16 papers18 22 24 25 33–36 41 43 44 46 48 52 53 55 and nine papers,26 37 38 40 42 45–47 57 respectively.
Input costs
We extracted 277 input cost metrics from 18 papers.4 17 22 26 27 31–34 38 40–43 47 51 52 55 The majority came from input cost analyses and cost-benefit analyses. Table 2 summarizes the final metric categories and sub-categories, and the papers which contained these metrics. One time direct costs were frequently mentioned. A total of 59 metrics from 14 papers fell into the ‘application development and deployment’ category. These included costs for design and development, implementation, IT support, and clinical support. Other categories with many metrics were ‘hardware and peripherals,’ ‘network, peripherals, supplies, and equipment,’ and ‘packaged and customized software.’ From Arlotto and Oakes’ original list, we did not find any reported cost metrics which mapped to ‘transition costs’ or ‘office accommodations, furniture, and related items.’
Fifteen papers analyzed ongoing direct costs.4 17 22 26 27 32–34 38 40 42 43 51 52 55 The categories with the most metrics were ‘salaries for IT and assigned end-user staff’ and ‘software maintenance, subscriptions, and upgrades.’ We added categories for ‘maintenance’ and ‘hardware and equipment’ as these appeared in a few papers.
Ongoing indirect costs appeared less frequently in the papers we reviewed. Of the original categories by Arlotto and Oakes, we had two metrics on ‘IT policy management’ and ‘workload shift.’ We also found that most papers17 22 26 27 32–34 38 40 42 43 47 51 52 joined all direct and indirect costs together or included other cost items, so we added categories for other, overall, and total costs.
Outcomes
We extracted 195 outcome metrics from 27 papers. Table 3 summarizes the final metric categories and sub-categories, and the papers which contained these metrics. The majority of the metrics came from papers classified as a cost-benefit analysis or cost-related outcome analysis. The category ‘resource utilization’ had the most metrics. The majority of these were for savings associated with laboratory tests or medications. Nine papers had outcomes on medication use and management.25 32 35 36 44–46 51 52 For example, several metrics were reported by McMullin et al36 on drug cost savings related to the use of an electronic prescribing system with integrated decision support. Metrics from this paper included ‘per member per month expenditure on new prescriptions and their refills,’ which was broken down into expected cost, actual cost, and savings. These were counted as separate metrics because they had distinct values.
‘Labor savings’ were frequently examined in the evaluations and we added a category for ‘healthcare service provision savings’ which included metrics on clinical outcomes. These metrics measured the impact from a clinical perspective. For example, four papers measured the impact on adverse drug events (ADE).22 32 52 55 Byrne et al22 looked at the value of the computerized patient record system in VistA by examining reduced costs for preventable ADEs caused by inpatient and outpatient medications.
CCA and cost-benefit analysis papers included metrics for revenues. From the original subcategories by Arlotto and Oakes, we found metrics for ‘increases in cases/patient days/outpatient volumes’ but none for ‘reduction in days in accounts receivable’ or ‘reduction of administrative denials.’ As well, we mapped only one metric each to ‘reduction in cost of ownership of existing technologies’ and ‘capital expense reduction.’ It appeared outcomes for these areas were not addressed in the papers we reviewed. Similar to the input cost metrics categorization, we added a total costs/savings category since many papers joined all outcomes into a general figure.
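As an illustration of the mapping step behind tables 2 and 3, the sketch below assigns free-text metric descriptions to cost/outcome categories by keyword lookup. The category labels follow the Arlotto and Oakes-based scheme described above, but the keywords and example metric are hypothetical; the actual mapping in the review was done manually by the authors.

```python
# Illustrative keyword-based mapping of extracted metric descriptions to
# categories (labels adapted from Arlotto and Oakes; keywords and the example
# metric below are hypothetical, not taken from the reviewed papers).
CATEGORY_KEYWORDS = {
    "Direct costs (one time): application development and deployment":
        ("implementation", "design and development", "clinical support"),
    "Direct costs (ongoing): software maintenance, subscriptions, and upgrades":
        ("maintenance", "subscription", "software upgrade"),
    "Outcomes: resource utilization (medications)":
        ("drug cost", "prescription expenditure", "medication savings"),
}

def categorize(metric: str) -> str:
    """Return the first category whose keywords appear in the metric text."""
    text = metric.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "Uncategorized (flag for manual review)"

print(categorize("Per member per month drug cost savings on new prescriptions"))
# -> Outcomes: resource utilization (medications)
```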
Analytical methods
Table 4 shows the analytical methods used, including their data sources. Cost accounting was the most common method used and included incremental cost-effectiveness ratios (ICER), return on investment, payback, net present value, net benefit, operating margin,22 32 33 38 40 42 47 51 52 55 least cost, average cost, and cost savings.4 17 18 21 25 34–37 44 46 48 53 54 Most papers used historical costs to estimate future outcomes for a specified time period.20 25 27 29 35–37 41 42 44–46 48 53 54 57 The methods of estimation included linear and logistic regression,24 25 45 46 53 scenarios,4 26 43 and a general/linear model.35 36 37 Many adjusted for inflation,4 17 18 22 27 33 41 42 51 52 discounting,17 32 33 40 41 43 52 55 and amortization/depreciation.33 34 40 51 Some papers used statistical methods to test for differences among groups that included the t test,29 37 47 53 57 χ2 test,57 and analysis of variance.37 46 Several papers used econometric or financial modeling methods based on panel regression,20 parametric cost analysis,27 41 stochastic frontier analysis,29 and simulation (eg, Markov or Monte Carlo).21 24 40 41
Table 1 Economic evaluation components: perspective, time frame, and comparison
Columns: input cost analysis; economic analysis (CMA, CCA, CEA, CUA, CBA); cost-related outcome analysis
Perspective
  Organization: 4 1 2 2 1 6 13
  Society: 1 1 1
  Individual: 1
  Payer: 1
Time frame
  <6 months: 1
  6–11 months: 1 3
  1–5 years: 5 1 1 1 2 8
  6–10 years: 1 1 3 3
  >10 years: 1
  Several points: 1
Alternatives for comparison
  With-without HIS: 1 1 2 1 2 9
  Pre-post implementation: 1 2 2 4
  Types of IT: 1 2
  Levels of IT: 1 1 2
  Different systems: 1
  Different points in time: 1
  Not reported: 1
Counts represent the number of papers for each item. See online supplementary appendix 8 for this table with all paper references. CBA, cost-benefit analysis; CCA, cost-consequence analysis; CEA, cost-effectiveness analysis; CMA, cost-minimization analysis; CUA, cost-utility analysis; HIS, health information system; IT, information technology.
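As a concrete illustration of the cost accounting measures listed above, the sketch below computes net present value, return on investment, and payback period for a hypothetical cost and benefit stream, with a simple one-way sensitivity analysis over the discount rate. All figures are invented for illustration and are not drawn from the reviewed papers.

```python
# Hypothetical 5-year cost and benefit streams for an HIS investment
# (year 0 = acquisition and implementation; all dollar figures are invented).
costs    = [120_000, 20_000, 20_000, 20_000, 20_000, 20_000]
benefits = [0,       45_000, 50_000, 55_000, 55_000, 55_000]

def npv(costs, benefits, rate):
    """Net present value of (benefit - cost), discounted at `rate`."""
    return sum((b - c) / (1 + rate) ** t
               for t, (c, b) in enumerate(zip(costs, benefits)))

def roi(costs, benefits, rate):
    """Discounted benefits divided by discounted costs, minus 1."""
    pv_costs = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    pv_benefits = sum(b / (1 + rate) ** t for t, b in enumerate(benefits))
    return pv_benefits / pv_costs - 1

def payback_year(costs, benefits):
    """First year in which cumulative undiscounted net benefit is non-negative."""
    cumulative = 0
    for t, (c, b) in enumerate(zip(costs, benefits)):
        cumulative += b - c
        if cumulative >= 0:
            return t
    return None  # does not pay back within the horizon

# One-way sensitivity analysis over the discount rate.
for rate in (0.03, 0.05, 0.08):
    print(f"rate={rate:.0%}  NPV=${npv(costs, benefits, rate):,.0f}  "
          f"ROI={roi(costs, benefits, rate):.1%}")
print("payback year:", payback_year(costs, benefits))
```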
Findings by HIS type
To make sense of the findings, we summarize the 33 high quality papers listed in online supplementary appendix 12 by the type of HIS, economic analysis, valuation of costs and outcomes, comparison methods, and key results. When grouped by HIS type there were seven papers on primary care EMR systems, six on CPOE systems, five on medication management systems, five on immunization information systems, four on institutional information systems, three on disease management systems, two on clinical documentation systems, and one on health information exchange (HIE) networks. Of these 33 papers, 23 (69.7%) reported positive economic results, eight (24.2%) were inconclusive (including one paper that only reported best case costs), and two (6.1%) were negative. The overall findings by HIS type are summarized below.
Primary care EMR systems
Of the seven papers reviewed, six were on pre/post or with/without EMR implementation comparison26 38 47 42 52 53 and one looked at the impact of an EMR on combination drug cost savings.54 Five of the seven papers reported positive economic results over different time periods that ranged from 6 months to 8 years. For instance, Wang et al52 analyzed the financial benefits of implementing EMR systems in an ambulatory care clinic over a 5-year period and reported a net benefit of $86 400 per provider discounted at 5% in 2002 US dollars. For their ambulatory surgery clinic, Patil et al42 reported an average cost-saving of US$3.09 per encounter that translated to US$184 627 per provider over 4 years given a startup EMR cost of US$10 329 per provider.
Table 2 Economic evaluation components: input cost metrics
Columns: input cost analysis; economic analysis (CMA, CCA, CEA, CUA, CBA); total metrics
Direct costs—one time: 168
  Hardware and peripherals (general hardware, specific types of hardware, infrastructure): 5 1 1 1 1 4 18
  Network, peripherals, supplies, and equipment (telecommunications, supplies): 3 1 1 1 23
  Packaged and customized software (software, software upgrade, software license): 3 1 1 1 6 13
  Application development and deployment (design and development, implementation, IT support, clinical support): 5 2 1 1 5 59
  Configuration management (interface, system): 2 1 5
  Initial data collection and conversion or archival data (extraction, entry, transcription): 5 1 1 1 11
  End-user project management: 1 1 1 6
  Initial user training: 2 1 1 4 10
  Workforce adjustment for affected employees: 2 2
  Project planning, contract negotiation, and procurement (vendor, personnel, approvals, feasibility): 1 11 1 7
  Transition costs (costs of running parallel systems or conversion of legacy systems): 0
  Facilities upgrades, including site preparation and renovation (capital, utilities): 2 1 1 5
  Office accommodations, furniture, and related items: 0
  Quality assurance and post-implementation reviews: 1 1
  Other initial costs: 1 1 1 3
  Overall initial cost: 1 25
Direct costs—ongoing: 91
  Software maintenance, subscriptions, and upgrades (data refresh, software upgrade, software fees, interfaces, other upgrades): 3 1 2 4 18
  Maintenance (system maintenance, travel, administration): 2 1 1 2 7
  Hardware and equipment (hardware replacement, equipment, and supplies): 3 4 9
  Salaries for IT and assigned end-user staff (technical, administrative, clinical): 4 1 1 1 23
  Ongoing training: 2 1 1 1 6
  Facilities rental and utilities: 1 2
  Professional services: 1 1 1 3
  Reviews and audits: 1 1
  Other ongoing (operating, other-not specified, reporting): 2 1 1 11
  Overall ongoing (annual, total, per record/encounter): 2 1 4 11
Indirect costs—ongoing: 2
  Security: 0
  Privacy: 0
  IT policy management: 1 1
  Help desk: 0
  Workload shift: 1 1
Total costs: 2 2 1 1 16
Counts represent the number of papers including the metric (note: some papers had multiple metrics mapping to the same category). See online supplementary appendix 9 for this table with all paper references. CBA, cost-benefit analysis; CCA, cost-consequence analysis; CEA, cost-effectiveness analysis; CMA, cost-minimization analysis; CUA, cost-utility analysis; HIS, health information system; IT, information technology.
Computerized physician order entry systems
Of the six papers reviewed, five were pre/post or with/without CPOE comparisons,37 40 45 55 57 and one was based on a single CPOE system over time.32 Three papers reported positive economic results, two were inconclusive, and one was negative. For instance, Wu et al55 found the ICER was US$12 700 per ADE averted but the effect was dependent on ADE rate, physician and system cost, and the system’s ability to reduce ADEs.
Ohsfeldt et al40 compared the impact of CPOE on existing information systems in 74 hospitals grouped as rural, rural-referral, urban, and critical access hospitals with increasing levels of IT infrastructure. The authors found that after CPOE implementation operating margins over 2 years showed a decrease for all hospital types but a deficit for rural and critical access hospitals. They concluded that CPOE cost was likely not financially feasible for small hospitals.
Table 3 Economic evaluation components: outcome metrics
Columns: economic analysis (CMA, CCA, CEA, CUA, CBA); cost-related outcome analysis; total metrics
Tangible: 195
Revenues: 11 13
  Increase in cases/patient days/outpatient volumes: 1 1 2
  Increase in reimbursement rates through contractual changes: 1 1
  Reduction in days in accounts receivable: 0
  Reduction of administrative denials: 0
  Billings: 2 3
  Payer mix: 1 4
Labor savings (reduction of FTEs) or productivity improvements: 34
  Documentation savings: 11 2
  Data entry savings: 1 2 4
  Report generation savings: 1 2
  Time savings: 1 1 10
  Chart retrieval/return: 2 1 6
  Staffing costs/savings: 1 1 1 4
Efficiency savings: 12
  Provider: 2 4
  Cost inefficiency: 1 1
  Transaction: 3 6
  Communication: 1 1
Supply savings: 2 1 4
Resource utilization: 91
  Laboratory: 1 4 1 31
  Radiology: 4 1 7
  Medications: 3 7 43
  Procedures: 2 10
Reduction in cost of ownership of existing technologies: 1 1
Capital expense reduction relating to facilities, medical equipment, or elimination of other technologies: 1 1
Healthcare service provision savings: 1 24
  Clinical outcomes: 1 23
  Patient safety events: 1 3
  ADEs avoided: 1 3 9
  Disease prevention/management: 1 2 7
Total costs/savings: 3 5 25
  Annual: 3 7
  Net benefit: 2 4
  ICER: 1 1 3
  Other: 1 2
Counts represent the number of papers including the metric (note: some papers had multiple metrics mapping to the same category). See online supplementary appendix 10 for this table with all paper references. ADE, adverse drug event; CBA, cost-benefit analysis; CCA, cost-consequence analysis; CEA, cost-effectiveness analysis; CMA, cost-minimization analysis; CUA, cost-utility analysis; FTE, full-time equivalent; ICER, incremental cost-effectiveness ratio.
Medication management systems
Of the five papers reviewed, all were on pre/post or with/without implementation comparison of electronic prescribing and alert systems.25 35 36 44 46 All five papers reported positive economic results over time periods that ranged from 3 months to 2 years. For instance, Fischer et al25 examined 17.4 million prescriptions filled in one state over an 18-month period for differences in generic versus brand name prescriptions before and after the implementation of e-prescribing with formulary decision support. They estimated a 3.3% increase in the use of lower cost medications, which translated to potential annual savings of $845 000 per 100 000 patients.
Immunization information systems
Of the five papers reviewed, two compared paper systems with immunization information systems,18 34 two compared different immunization system applications,17 27 and one compared electronic with hybrid systems.43 Three papers reported positive economic results, with the other two being inconclusive. All five papers emphasized that the value of their systems was dependent on provider participation rates and the success of large scale system rollout.
For example, Bartlett et al17 used 5 years of historical records from 24 state/city level health departments and practices to estimate the average cost per patient to achieve the immunization goal of 95% participation in 8 years. They found the cost varied from US$0.09 to US$10.30 depending on the extent of provider participation and the number of patient records present.
Institutional information systems
All four papers reviewed were focused on the value of major clinical information system components such as laboratory, radiology, and pharmacy CPOE with or without decision support, medication administration record (MAR), and clinical/nursing documentation. Of these four papers, three reported positive economic results and one negative. The negative paper is by Furukawa et al29 and compared three stages of EHR implementation with increasing levels of sophistication in the medical/surgical units of 326 hospitals over a 10-year period. Hospitals with a clinical data repository and all three systems (laboratory, radiology, and pharmacy) were defined as stage 1; those with MAR and nursing documentation systems were stage 2; and those that included CPOE and clinical decision support (CDS) were stage 3. By comparing the mean inefficiency ratios of hospitals at the three stages, the authors concluded that stages 1 and 2, nursing documentation, MAR, and CDS were associated with significantly higher inefficiency in costs.
Disease management systems
The three papers in this category4 21 41 were on diabetes management, and all reported positive economic results. O’Reilly et al41 reported a reduced relative risk of complication, a better ICER, and an improved quality-adjusted life year based on 1 year of CDS supported treatments. Adler-Milstein et al4 found diabetes registries to be the least expensive of the five management approaches examined for small and medium size practices, whereas EHR with CDS added were most economical for large practices. Bu et al21 compared five types of diabetes management systems in terms of their deployment level from 20% to 100% over a 10-year period. All forms of IT enabled disease management were expected to improve the health of patients with diabetes and reduce costs.
Clinical documentation systems
The two papers reviewed33 48 reported inconclusive economic results. For example, Kopach et al33 compared traditional versus automated discharge note documentation systems in an academic hospital. The outcome was the estimated mean delay in documentation time for discharge note completion, adjusted for discharge volume increase and note volume decrease over 4 years.
They found an ICER of C$0.331 per day that was deemed expensive but cost-effective, and was dependent on physician utilization volume and length of stay.
Table 4 Economic evaluation components: analytical methods
Columns: input cost analysis; economic analysis (CMA, CCA, CEA, CUA, CBA); cost-related outcome analysis
Data sources
  Historical/published costs used in comparison: 1 1 1 13
  Historical/estimated costs for pre/post comparison: 1 1 2
  Historical/estimated costs used to project costs/benefits: 3 1 2 4 3
Accounting
  Adjusted for inflation, discounting, cost amortization/depreciation: 4 1 1 2 1 4 2
  Included sensitivity analysis, scenarios: 3 1 2 1 4
  Used ICER, ROI, payback, NPV, net benefit, operating margin: 2 2 1 6
  Used least cost, average cost, savings: 4 1 14
  Used QALY ratios: 1
Statistics
  Regression, logistic regression: 5
  General linear/mixed model: 3
  Group differences, eg, χ2, t test, ANOVA, proportions: 1 10
Econometric/operations research
  Panel regression, fixed effect: 1
  Parametric cost analysis: 1 1
  Stochastic frontier analysis, including alternatives for checks: 1
  Simulation with Markov, Monte Carlo models: 1 1 2
  Used mean inefficiency score, regression coefficient: 2
Counts represent the number of papers including each method. See online supplementary appendix 11 for this table with all paper references. Descriptions of analytical methods are included in online supplementary appendix 4. ANOVA, analysis of variance; CBA, cost-benefit analysis; CCA, cost-consequence analysis; CEA, cost-effectiveness analysis; CMA, cost-minimization analysis; CUA, cost-utility analysis; ICER, incremental cost-effectiveness ratio; NPV, net present value; QALY, quality-adjusted life year; ROI, return on investment.
Health information exchange
Walker et al51 examined three scenarios with four levels of HIE rollout in the USA over a 10-year period: level 1 did not use IT to share information; level 2 used transmission of nonstandardized data that cannot be electronically manipulated; level 3 used transmission of structured messages containing nonstandardized data; and level 4 used transmission of structured messages with standardized and coded data. The authors reported positive cost–benefit ratios that ranged from US$21.6 billion at level 2 to US$77.8 billion at level 4 adoption, with reduced duplicates and improved utilization in laboratory and radiology tests.
DISCUSSION
Components of economic evaluation
All 42 papers studied in this review included some type of economic evaluation, but we only examined the 33 highest quality papers for our analyses of the original results. We divided the papers into input cost analysis, economic analysis, and cost-related outcome analysis studies. O’Reilly et al58 used a similar approach in their review to split economic evaluations into full and partial evaluations. In our review, we found 12 economic analyses, most of which were cost-benefit analyses. The other types of economic analyses were rarely seen. The 16 cost-related outcome analysis papers focused mostly on cost savings or cost changes after implementation. However, without knowing the initial costs of implementing the system, it is difficult to determine whether the savings are worth the investment.
Bu et al21 noted that ‘high-benefit approaches may or may not be associated with high costs of implementation.’ Still, these evaluations provide useful data to aid in the justification for a new system because they demonstrate value. Five papers looked only at input costs. Again, solely examining the cost of a system does not provide enough information to make a decision. One alternative may cost less but may also produce fewer benefits than a more expensive choice.4 Rantz et al48 found that cost increases (outcome) were likely due to the cost of the technology, maintaining and supporting the technology, and ongoing staff training. Therefore, evaluations focusing on cost provide important insight into where costs may be incurred for system implementation.
Current knowledge on value for money of HIS
In this review, 23/33 or 69.7% of the papers reported positive findings demonstrating value for money, but in specific healthcare settings with specific HIS types. For instance, 13/15 or 86.7% of papers on primary care EMR, medication management, and disease management systems had positive findings. CPOE, immunization, and documentation systems had mixed findings in more than one paper. The paper on HIE was positive but it assumed a national rollout over a 10-year period, which may be difficult to achieve given the complex nature of the system involved. Our findings are similar to those of O’Reilly et al,58 which showed positive results with primary care EMR and prescribing systems, and variable effects with CPOE and decision support systems. Other reviews have also shown positive results in specific aspects of medication management59–64 and chronic disease management such as preventive care and reminders.15 65–67
Based on these findings, we speculate that currently there is some evidence that HIS can improve care in areas such as primary care, medication management, and disease management, but they can be expensive to implement and maintain over time and require a great deal of effort to ensure the adoption is done ‘properly’ to reap the benefits.16 As HIS become more complicated, as in CPOE, clinical documentation, and immunization systems where there are multiple objectives, target audiences, and performance variations, there may be diminishing returns in having to manage the increased complexities, needed coordination, and stakeholder expectations. In these instances there is still a need for more research to determine how best to design and implement these systems in ways that can help, not hinder, the clinical work. Overall, we should point out that, while these findings will become outdated as more papers are published, one can repeat our review methods for new studies to update the findings. Also, we believe the analytical methods and metrics reported in this review already cover the breadth of approaches likely to be expected in future studies.
Guidance for future economic evaluations
This review provides a glimpse of what has been published on HIS economic evaluation to date in terms of the types of economic analysis carried out, the HIS areas covered, and making sense of the findings. From this review it is clear that a high quality economic evaluation should be explicit about its six key components: a perspective, options for comparison, a time frame, costs, outcomes, and a comparison of costs and outcomes for each option. The 33 high quality papers included in this review can serve as a reference source for those planning to conduct HIS economic evaluation studies.
In particular, the methodological aspects listed in tables 1–4 and the evaluation results by HIS type listed in online supplementary appendix 12 can be used to build on what has already been done to expand the evidence base. Lastly, the inclusion of the six key components and checking against the 10 quality assessment criteria used in this review can improve consistency in the design, reporting, and comparison of HIS economic evaluation studies.
Limitations and caveats
There are several limitations to this review. First, our search and data extraction were done by one reviewer, so there could have been selection bias. Second, our review only looked at tangible costs and outcomes in the papers, and the level of detail reported varied widely. Third, our categorizations for economic evaluation and HIS types were done through consensus, but alternative classifications exist. Fourth, the organizations and systems in the papers were vastly different even when the same type of system was described. Caution is needed in applying the results to a specific setting and system. Finally, the larger number of positive studies in our review could be due to publication bias, where papers with positive findings were more likely to be published.
We are grateful for the extensive feedback received on this manuscript from the journal reviewers. While appreciating the encyclopedic nature of our content, they offered several important caveats for readers. First, the inclusion of papers with estimated costs, such as those based on expert opinions rather than actual costs, could be called into question. Second, papers based on avoidance costs are less convincing than those with tangible, measurable outputs. Third, papers that rely entirely on modeling and projections are hypothetical in nature and may generate unrealistic expectations. Rather than simply removing these papers from the review, we decided to leave them in the fina